Test Report: KVM_Linux_crio 16761

ca7e0fd59d4571f4bf5c8ef52ccb5634a88f3699:2023-06-26:29886

Failed tests (26/292)

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 157.47
36 TestAddons/StoppedEnableDisable 140.22
151 TestIngressAddonLegacy/serial/ValidateIngressAddons 164.26
199 TestMultiNode/serial/PingHostFrom2Pods 3.01
205 TestMultiNode/serial/RestartKeepsNodes 684.18
207 TestMultiNode/serial/StopMultiNode 143.67
214 TestPreload 281.41
220 TestRunningBinaryUpgrade 157.55
257 TestStoppedBinaryUpgrade/Upgrade 273.78
261 TestNoKubernetes/serial/StartNoArgs 75.37
266 TestStartStop/group/old-k8s-version/serial/Stop 140.73
276 TestStartStop/group/no-preload/serial/Stop 140.29
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 9.31
281 TestStartStop/group/embed-certs/serial/Stop 140.04
284 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.91
285 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 9.31
287 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 9.31
289 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 9.31
291 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.29
292 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.32
293 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.45
294 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.5
295 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 463.96
296 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 175.41
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 210.29
298 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 210.53
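
Any entry above can be re-run locally by feeding its subtest path to go test's -run filter. A minimal sketch from the minikube repository root; the timeout and any extra harness flags (driver, container runtime) are assumptions, not taken from this report:

    # Hypothetical local re-run of the first failed test; subtest paths
    # from the table work verbatim as -run patterns.
    go test -v -timeout 60m ./test/integration -run 'TestAddons/parallel/Ingress'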
TestAddons/parallel/Ingress (157.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-118062 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-118062 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-118062 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [db3ca141-9ce1-4c7a-bc14-61d87f501c0b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [db3ca141-9ce1-4c7a-bc14-61d87f501c0b] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.012615455s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-118062 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.900657994s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
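Exit status 28 surfacing through ssh matches curl's "operation timed out" error code, which is why the assertion at addons_test.go:254 fails: the HTTP request to the ingress never completed within the roughly 2m10s the step ran. A minimal sketch for repeating the probe by hand, assuming the addons-118062 profile is still up (--max-time is an illustrative addition, not part of the harness):

    # Repeat the probe the test runs inside the VM; on failure minikube
    # exits non-zero and reports the remote exit status (28 = curl timeout).
    out/minikube-linux-amd64 -p addons-118062 ssh \
      "curl -s --max-time 60 -H 'Host: nginx.example.com' http://127.0.0.1/"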
addons_test.go:262: (dbg) Run:  kubectl --context addons-118062 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:262: (dbg) Done: kubectl --context addons-118062 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.020223497s)
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.92
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-118062 addons disable ingress-dns --alsologtostderr -v=1: (1.605169567s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-118062 addons disable ingress --alsologtostderr -v=1: (7.540150809s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-118062 -n addons-118062
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-118062 logs -n 25: (1.202721921s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-081510 | jenkins | v1.30.1 | 26 Jun 23 19:35 UTC |                     |
	|         | -p download-only-081510        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-081510 | jenkins | v1.30.1 | 26 Jun 23 19:36 UTC |                     |
	|         | -p download-only-081510        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 26 Jun 23 19:36 UTC | 26 Jun 23 19:36 UTC |
	| delete  | -p download-only-081510        | download-only-081510 | jenkins | v1.30.1 | 26 Jun 23 19:36 UTC | 26 Jun 23 19:36 UTC |
	| delete  | -p download-only-081510        | download-only-081510 | jenkins | v1.30.1 | 26 Jun 23 19:36 UTC | 26 Jun 23 19:36 UTC |
	| start   | --download-only -p             | binary-mirror-462503 | jenkins | v1.30.1 | 26 Jun 23 19:36 UTC |                     |
	|         | binary-mirror-462503           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39443         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-462503        | binary-mirror-462503 | jenkins | v1.30.1 | 26 Jun 23 19:36 UTC | 26 Jun 23 19:36 UTC |
	| start   | -p addons-118062               | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:36 UTC | 26 Jun 23 19:39 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:39 UTC | 26 Jun 23 19:39 UTC |
	|         | -p addons-118062               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-118062 addons           | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:39 UTC | 26 Jun 23 19:39 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:39 UTC | 26 Jun 23 19:39 UTC |
	|         | addons-118062                  |                      |         |         |                     |                     |
	| ip      | addons-118062 ip               | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:39 UTC | 26 Jun 23 19:39 UTC |
	| addons  | addons-118062 addons disable   | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:39 UTC | 26 Jun 23 19:39 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:39 UTC | 26 Jun 23 19:39 UTC |
	|         | addons-118062                  |                      |         |         |                     |                     |
	| ssh     | addons-118062 ssh curl -s      | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                      |         |         |                     |                     |
	|         | nginx.example.com'             |                      |         |         |                     |                     |
	| addons  | addons-118062 addons disable   | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:39 UTC | 26 Jun 23 19:39 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-118062 addons           | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:40 UTC | 26 Jun 23 19:41 UTC |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-118062 addons           | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:41 UTC | 26 Jun 23 19:41 UTC |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-118062 ip               | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:41 UTC | 26 Jun 23 19:41 UTC |
	| addons  | addons-118062 addons disable   | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:41 UTC | 26 Jun 23 19:41 UTC |
	|         | ingress-dns --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-118062 addons disable   | addons-118062        | jenkins | v1.30.1 | 26 Jun 23 19:41 UTC | 26 Jun 23 19:41 UTC |
	|         | ingress --alsologtostderr -v=1 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 19:36:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 19:36:23.400505   14846 out.go:296] Setting OutFile to fd 1 ...
	I0626 19:36:23.400650   14846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:36:23.400663   14846 out.go:309] Setting ErrFile to fd 2...
	I0626 19:36:23.400670   14846 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:36:23.400796   14846 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 19:36:23.401446   14846 out.go:303] Setting JSON to false
	I0626 19:36:23.402240   14846 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1130,"bootTime":1687807053,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 19:36:23.402297   14846 start.go:137] virtualization: kvm guest
	I0626 19:36:23.404750   14846 out.go:177] * [addons-118062] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 19:36:23.406442   14846 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 19:36:23.406495   14846 notify.go:220] Checking for updates...
	I0626 19:36:23.407998   14846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 19:36:23.410107   14846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 19:36:23.411624   14846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:36:23.413027   14846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 19:36:23.414412   14846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 19:36:23.416265   14846 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 19:36:23.448125   14846 out.go:177] * Using the kvm2 driver based on user configuration
	I0626 19:36:23.449581   14846 start.go:297] selected driver: kvm2
	I0626 19:36:23.449593   14846 start.go:954] validating driver "kvm2" against <nil>
	I0626 19:36:23.449604   14846 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 19:36:23.450237   14846 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:36:23.450302   14846 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 19:36:23.464098   14846 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 19:36:23.464154   14846 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 19:36:23.464346   14846 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 19:36:23.464370   14846 cni.go:84] Creating CNI manager for ""
	I0626 19:36:23.464381   14846 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 19:36:23.464389   14846 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0626 19:36:23.464402   14846 start_flags.go:319] config:
	{Name:addons-118062 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-118062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:36:23.464520   14846 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:36:23.467198   14846 out.go:177] * Starting control plane node addons-118062 in cluster addons-118062
	I0626 19:36:23.468961   14846 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 19:36:23.469000   14846 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 19:36:23.469022   14846 cache.go:57] Caching tarball of preloaded images
	I0626 19:36:23.469129   14846 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 19:36:23.469141   14846 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 19:36:23.469587   14846 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/config.json ...
	I0626 19:36:23.469617   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/config.json: {Name:mk58a6ddeea65bb6e062070f6d7165d54d7140a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:23.469778   14846 start.go:365] acquiring machines lock for addons-118062: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 19:36:23.469835   14846 start.go:369] acquired machines lock for "addons-118062" in 39.068µs
	I0626 19:36:23.469857   14846 start.go:93] Provisioning new machine with config: &{Name:addons-118062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-118062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 19:36:23.469944   14846 start.go:125] createHost starting for "" (driver="kvm2")
	I0626 19:36:23.472608   14846 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0626 19:36:23.472731   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:36:23.472769   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:36:23.486450   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38899
	I0626 19:36:23.486862   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:36:23.487417   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:36:23.487439   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:36:23.487774   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:36:23.487943   14846 main.go:141] libmachine: (addons-118062) Calling .GetMachineName
	I0626 19:36:23.488082   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:36:23.488235   14846 start.go:159] libmachine.API.Create for "addons-118062" (driver="kvm2")
	I0626 19:36:23.488259   14846 client.go:168] LocalClient.Create starting
	I0626 19:36:23.488291   14846 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem
	I0626 19:36:23.762811   14846 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem
	I0626 19:36:23.857154   14846 main.go:141] libmachine: Running pre-create checks...
	I0626 19:36:23.857179   14846 main.go:141] libmachine: (addons-118062) Calling .PreCreateCheck
	I0626 19:36:23.857691   14846 main.go:141] libmachine: (addons-118062) Calling .GetConfigRaw
	I0626 19:36:23.858119   14846 main.go:141] libmachine: Creating machine...
	I0626 19:36:23.858135   14846 main.go:141] libmachine: (addons-118062) Calling .Create
	I0626 19:36:23.858270   14846 main.go:141] libmachine: (addons-118062) Creating KVM machine...
	I0626 19:36:23.859382   14846 main.go:141] libmachine: (addons-118062) DBG | found existing default KVM network
	I0626 19:36:23.860096   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:23.859977   14868 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d790}
	I0626 19:36:23.865877   14846 main.go:141] libmachine: (addons-118062) DBG | trying to create private KVM network mk-addons-118062 192.168.39.0/24...
	I0626 19:36:23.933193   14846 main.go:141] libmachine: (addons-118062) DBG | private KVM network mk-addons-118062 192.168.39.0/24 created
	I0626 19:36:23.933224   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:23.933149   14868 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:36:23.933261   14846 main.go:141] libmachine: (addons-118062) Setting up store path in /home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062 ...
	I0626 19:36:23.933289   14846 main.go:141] libmachine: (addons-118062) Building disk image from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso
	I0626 19:36:23.933319   14846 main.go:141] libmachine: (addons-118062) Downloading /home/jenkins/minikube-integration/16761-7242/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso...
	I0626 19:36:24.139736   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:24.139609   14868 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa...
	I0626 19:36:24.302696   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:24.302581   14868 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/addons-118062.rawdisk...
	I0626 19:36:24.302734   14846 main.go:141] libmachine: (addons-118062) DBG | Writing magic tar header
	I0626 19:36:24.302749   14846 main.go:141] libmachine: (addons-118062) DBG | Writing SSH key tar header
	I0626 19:36:24.302768   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:24.302680   14868 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062 ...
	I0626 19:36:24.302786   14846 main.go:141] libmachine: (addons-118062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062
	I0626 19:36:24.302806   14846 main.go:141] libmachine: (addons-118062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines
	I0626 19:36:24.302827   14846 main.go:141] libmachine: (addons-118062) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062 (perms=drwx------)
	I0626 19:36:24.302845   14846 main.go:141] libmachine: (addons-118062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:36:24.302859   14846 main.go:141] libmachine: (addons-118062) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines (perms=drwxr-xr-x)
	I0626 19:36:24.302879   14846 main.go:141] libmachine: (addons-118062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242
	I0626 19:36:24.302897   14846 main.go:141] libmachine: (addons-118062) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0626 19:36:24.302903   14846 main.go:141] libmachine: (addons-118062) DBG | Checking permissions on dir: /home/jenkins
	I0626 19:36:24.302919   14846 main.go:141] libmachine: (addons-118062) DBG | Checking permissions on dir: /home
	I0626 19:36:24.302932   14846 main.go:141] libmachine: (addons-118062) DBG | Skipping /home - not owner
	I0626 19:36:24.302945   14846 main.go:141] libmachine: (addons-118062) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube (perms=drwxr-xr-x)
	I0626 19:36:24.302963   14846 main.go:141] libmachine: (addons-118062) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242 (perms=drwxrwxr-x)
	I0626 19:36:24.302975   14846 main.go:141] libmachine: (addons-118062) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0626 19:36:24.302997   14846 main.go:141] libmachine: (addons-118062) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0626 19:36:24.303009   14846 main.go:141] libmachine: (addons-118062) Creating domain...
	I0626 19:36:24.303987   14846 main.go:141] libmachine: (addons-118062) define libvirt domain using xml: 
	I0626 19:36:24.304016   14846 main.go:141] libmachine: (addons-118062) <domain type='kvm'>
	I0626 19:36:24.304028   14846 main.go:141] libmachine: (addons-118062)   <name>addons-118062</name>
	I0626 19:36:24.304037   14846 main.go:141] libmachine: (addons-118062)   <memory unit='MiB'>4000</memory>
	I0626 19:36:24.304047   14846 main.go:141] libmachine: (addons-118062)   <vcpu>2</vcpu>
	I0626 19:36:24.304057   14846 main.go:141] libmachine: (addons-118062)   <features>
	I0626 19:36:24.304063   14846 main.go:141] libmachine: (addons-118062)     <acpi/>
	I0626 19:36:24.304071   14846 main.go:141] libmachine: (addons-118062)     <apic/>
	I0626 19:36:24.304077   14846 main.go:141] libmachine: (addons-118062)     <pae/>
	I0626 19:36:24.304083   14846 main.go:141] libmachine: (addons-118062)     
	I0626 19:36:24.304090   14846 main.go:141] libmachine: (addons-118062)   </features>
	I0626 19:36:24.304100   14846 main.go:141] libmachine: (addons-118062)   <cpu mode='host-passthrough'>
	I0626 19:36:24.304111   14846 main.go:141] libmachine: (addons-118062)   
	I0626 19:36:24.304126   14846 main.go:141] libmachine: (addons-118062)   </cpu>
	I0626 19:36:24.304180   14846 main.go:141] libmachine: (addons-118062)   <os>
	I0626 19:36:24.304205   14846 main.go:141] libmachine: (addons-118062)     <type>hvm</type>
	I0626 19:36:24.304231   14846 main.go:141] libmachine: (addons-118062)     <boot dev='cdrom'/>
	I0626 19:36:24.304254   14846 main.go:141] libmachine: (addons-118062)     <boot dev='hd'/>
	I0626 19:36:24.304265   14846 main.go:141] libmachine: (addons-118062)     <bootmenu enable='no'/>
	I0626 19:36:24.304284   14846 main.go:141] libmachine: (addons-118062)   </os>
	I0626 19:36:24.304297   14846 main.go:141] libmachine: (addons-118062)   <devices>
	I0626 19:36:24.304306   14846 main.go:141] libmachine: (addons-118062)     <disk type='file' device='cdrom'>
	I0626 19:36:24.304318   14846 main.go:141] libmachine: (addons-118062)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/boot2docker.iso'/>
	I0626 19:36:24.304327   14846 main.go:141] libmachine: (addons-118062)       <target dev='hdc' bus='scsi'/>
	I0626 19:36:24.304344   14846 main.go:141] libmachine: (addons-118062)       <readonly/>
	I0626 19:36:24.304358   14846 main.go:141] libmachine: (addons-118062)     </disk>
	I0626 19:36:24.304373   14846 main.go:141] libmachine: (addons-118062)     <disk type='file' device='disk'>
	I0626 19:36:24.304386   14846 main.go:141] libmachine: (addons-118062)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0626 19:36:24.304398   14846 main.go:141] libmachine: (addons-118062)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/addons-118062.rawdisk'/>
	I0626 19:36:24.304406   14846 main.go:141] libmachine: (addons-118062)       <target dev='hda' bus='virtio'/>
	I0626 19:36:24.304418   14846 main.go:141] libmachine: (addons-118062)     </disk>
	I0626 19:36:24.304430   14846 main.go:141] libmachine: (addons-118062)     <interface type='network'>
	I0626 19:36:24.304444   14846 main.go:141] libmachine: (addons-118062)       <source network='mk-addons-118062'/>
	I0626 19:36:24.304463   14846 main.go:141] libmachine: (addons-118062)       <model type='virtio'/>
	I0626 19:36:24.304481   14846 main.go:141] libmachine: (addons-118062)     </interface>
	I0626 19:36:24.304497   14846 main.go:141] libmachine: (addons-118062)     <interface type='network'>
	I0626 19:36:24.304509   14846 main.go:141] libmachine: (addons-118062)       <source network='default'/>
	I0626 19:36:24.304522   14846 main.go:141] libmachine: (addons-118062)       <model type='virtio'/>
	I0626 19:36:24.304533   14846 main.go:141] libmachine: (addons-118062)     </interface>
	I0626 19:36:24.304544   14846 main.go:141] libmachine: (addons-118062)     <serial type='pty'>
	I0626 19:36:24.304559   14846 main.go:141] libmachine: (addons-118062)       <target port='0'/>
	I0626 19:36:24.304573   14846 main.go:141] libmachine: (addons-118062)     </serial>
	I0626 19:36:24.304585   14846 main.go:141] libmachine: (addons-118062)     <console type='pty'>
	I0626 19:36:24.304598   14846 main.go:141] libmachine: (addons-118062)       <target type='serial' port='0'/>
	I0626 19:36:24.304606   14846 main.go:141] libmachine: (addons-118062)     </console>
	I0626 19:36:24.304619   14846 main.go:141] libmachine: (addons-118062)     <rng model='virtio'>
	I0626 19:36:24.304635   14846 main.go:141] libmachine: (addons-118062)       <backend model='random'>/dev/random</backend>
	I0626 19:36:24.304648   14846 main.go:141] libmachine: (addons-118062)     </rng>
	I0626 19:36:24.304661   14846 main.go:141] libmachine: (addons-118062)     
	I0626 19:36:24.304676   14846 main.go:141] libmachine: (addons-118062)     
	I0626 19:36:24.304688   14846 main.go:141] libmachine: (addons-118062)   </devices>
	I0626 19:36:24.304707   14846 main.go:141] libmachine: (addons-118062) </domain>
	I0626 19:36:24.304727   14846 main.go:141] libmachine: (addons-118062) 
	I0626 19:36:24.310160   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:b5:dc:47 in network default
	I0626 19:36:24.310670   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:24.310695   14846 main.go:141] libmachine: (addons-118062) Ensuring networks are active...
	I0626 19:36:24.311259   14846 main.go:141] libmachine: (addons-118062) Ensuring network default is active
	I0626 19:36:24.311541   14846 main.go:141] libmachine: (addons-118062) Ensuring network mk-addons-118062 is active
	I0626 19:36:24.311976   14846 main.go:141] libmachine: (addons-118062) Getting domain xml...
	I0626 19:36:24.312641   14846 main.go:141] libmachine: (addons-118062) Creating domain...
	I0626 19:36:25.726838   14846 main.go:141] libmachine: (addons-118062) Waiting to get IP...
	I0626 19:36:25.727559   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:25.727957   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:25.728038   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:25.727959   14868 retry.go:31] will retry after 210.021906ms: waiting for machine to come up
	I0626 19:36:25.939427   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:25.939885   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:25.939906   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:25.939833   14868 retry.go:31] will retry after 299.268848ms: waiting for machine to come up
	I0626 19:36:26.240179   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:26.240604   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:26.240635   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:26.240549   14868 retry.go:31] will retry after 304.338622ms: waiting for machine to come up
	I0626 19:36:26.546025   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:26.546428   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:26.546449   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:26.546389   14868 retry.go:31] will retry after 568.598322ms: waiting for machine to come up
	I0626 19:36:27.115931   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:27.116368   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:27.116398   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:27.116308   14868 retry.go:31] will retry after 639.894291ms: waiting for machine to come up
	I0626 19:36:27.758168   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:27.758552   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:27.758579   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:27.758514   14868 retry.go:31] will retry after 682.284675ms: waiting for machine to come up
	I0626 19:36:28.442264   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:28.442600   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:28.442643   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:28.442550   14868 retry.go:31] will retry after 800.418998ms: waiting for machine to come up
	I0626 19:36:29.244773   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:29.245164   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:29.245193   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:29.245130   14868 retry.go:31] will retry after 1.14580344s: waiting for machine to come up
	I0626 19:36:30.392423   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:30.392865   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:30.392884   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:30.392840   14868 retry.go:31] will retry after 1.444061813s: waiting for machine to come up
	I0626 19:36:31.838128   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:31.838508   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:31.838534   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:31.838462   14868 retry.go:31] will retry after 2.305015267s: waiting for machine to come up
	I0626 19:36:34.144961   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:34.145468   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:34.145502   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:34.145393   14868 retry.go:31] will retry after 2.229652934s: waiting for machine to come up
	I0626 19:36:36.377840   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:36.378164   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:36.378186   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:36.378134   14868 retry.go:31] will retry after 3.125155693s: waiting for machine to come up
	I0626 19:36:39.504511   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:39.504939   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:39.505012   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:39.504892   14868 retry.go:31] will retry after 3.448844159s: waiting for machine to come up
	I0626 19:36:42.957462   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:42.957879   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find current IP address of domain addons-118062 in network mk-addons-118062
	I0626 19:36:42.957905   14846 main.go:141] libmachine: (addons-118062) DBG | I0626 19:36:42.957853   14868 retry.go:31] will retry after 5.61116464s: waiting for machine to come up
	I0626 19:36:48.570878   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:48.571311   14846 main.go:141] libmachine: (addons-118062) Found IP for machine: 192.168.39.92
	I0626 19:36:48.571333   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has current primary IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:48.571343   14846 main.go:141] libmachine: (addons-118062) Reserving static IP address...
	I0626 19:36:48.571698   14846 main.go:141] libmachine: (addons-118062) DBG | unable to find host DHCP lease matching {name: "addons-118062", mac: "52:54:00:8e:fd:20", ip: "192.168.39.92"} in network mk-addons-118062
	I0626 19:36:48.642630   14846 main.go:141] libmachine: (addons-118062) Reserved static IP address: 192.168.39.92
	I0626 19:36:48.642657   14846 main.go:141] libmachine: (addons-118062) DBG | Getting to WaitForSSH function...
	I0626 19:36:48.642675   14846 main.go:141] libmachine: (addons-118062) Waiting for SSH to be available...
	I0626 19:36:48.645264   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:48.645903   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:48.645931   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:48.646108   14846 main.go:141] libmachine: (addons-118062) DBG | Using SSH client type: external
	I0626 19:36:48.646137   14846 main.go:141] libmachine: (addons-118062) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa (-rw-------)
	I0626 19:36:48.646172   14846 main.go:141] libmachine: (addons-118062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 19:36:48.646183   14846 main.go:141] libmachine: (addons-118062) DBG | About to run SSH command:
	I0626 19:36:48.646191   14846 main.go:141] libmachine: (addons-118062) DBG | exit 0
	I0626 19:36:48.753278   14846 main.go:141] libmachine: (addons-118062) DBG | SSH cmd err, output: <nil>: 
	I0626 19:36:48.753592   14846 main.go:141] libmachine: (addons-118062) KVM machine creation complete!
	I0626 19:36:48.753931   14846 main.go:141] libmachine: (addons-118062) Calling .GetConfigRaw
	I0626 19:36:48.754500   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:36:48.754679   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:36:48.754845   14846 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0626 19:36:48.754860   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:36:48.756061   14846 main.go:141] libmachine: Detecting operating system of created instance...
	I0626 19:36:48.756082   14846 main.go:141] libmachine: Waiting for SSH to be available...
	I0626 19:36:48.756089   14846 main.go:141] libmachine: Getting to WaitForSSH function...
	I0626 19:36:48.756104   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:48.758157   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:48.758502   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:48.758536   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:48.758677   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:48.758836   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:48.759043   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:48.759188   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:48.759357   14846 main.go:141] libmachine: Using SSH client type: native
	I0626 19:36:48.759768   14846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0626 19:36:48.759781   14846 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0626 19:36:48.888574   14846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 19:36:48.888597   14846 main.go:141] libmachine: Detecting the provisioner...
	I0626 19:36:48.888605   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:48.891323   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:48.891671   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:48.891705   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:48.891844   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:48.892029   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:48.892165   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:48.892298   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:48.892490   14846 main.go:141] libmachine: Using SSH client type: native
	I0626 19:36:48.892858   14846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0626 19:36:48.892870   14846 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0626 19:36:49.022100   14846 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2e95ab-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0626 19:36:49.022153   14846 main.go:141] libmachine: found compatible host: buildroot
	I0626 19:36:49.022159   14846 main.go:141] libmachine: Provisioning with buildroot...
	I0626 19:36:49.022166   14846 main.go:141] libmachine: (addons-118062) Calling .GetMachineName
	I0626 19:36:49.022422   14846 buildroot.go:166] provisioning hostname "addons-118062"
	I0626 19:36:49.022449   14846 main.go:141] libmachine: (addons-118062) Calling .GetMachineName
	I0626 19:36:49.022615   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:49.025164   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.025564   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:49.025593   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.025694   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:49.025887   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:49.026017   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:49.026158   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:49.026291   14846 main.go:141] libmachine: Using SSH client type: native
	I0626 19:36:49.026673   14846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0626 19:36:49.026685   14846 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-118062 && echo "addons-118062" | sudo tee /etc/hostname
	I0626 19:36:49.165847   14846 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-118062
	
	I0626 19:36:49.165871   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:49.168366   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.168763   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:49.168799   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.168941   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:49.169122   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:49.169285   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:49.169504   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:49.169693   14846 main.go:141] libmachine: Using SSH client type: native
	I0626 19:36:49.170086   14846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0626 19:36:49.170102   14846 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-118062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-118062/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-118062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 19:36:49.305738   14846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 19:36:49.305793   14846 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 19:36:49.305825   14846 buildroot.go:174] setting up certificates
	I0626 19:36:49.305835   14846 provision.go:83] configureAuth start
	I0626 19:36:49.305846   14846 main.go:141] libmachine: (addons-118062) Calling .GetMachineName
	I0626 19:36:49.306167   14846 main.go:141] libmachine: (addons-118062) Calling .GetIP
	I0626 19:36:49.308805   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.309137   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:49.309168   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.309346   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:49.311242   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.311522   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:49.311550   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.311644   14846 provision.go:138] copyHostCerts
	I0626 19:36:49.311729   14846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 19:36:49.311861   14846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 19:36:49.311935   14846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 19:36:49.312019   14846 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.addons-118062 san=[192.168.39.92 192.168.39.92 localhost 127.0.0.1 minikube addons-118062]
	I0626 19:36:49.474616   14846 provision.go:172] copyRemoteCerts
	I0626 19:36:49.474666   14846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 19:36:49.474686   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:49.477138   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.477550   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:49.477582   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.477743   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:49.477895   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:49.478052   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:49.478159   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:36:49.571141   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 19:36:49.593274   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0626 19:36:49.615244   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 19:36:49.640642   14846 provision.go:86] duration metric: configureAuth took 334.778604ms
	I0626 19:36:49.640678   14846 buildroot.go:189] setting minikube options for container-runtime
	I0626 19:36:49.640927   14846 config.go:182] Loaded profile config "addons-118062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 19:36:49.641008   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:49.643417   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.643755   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:49.643794   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.643999   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:49.644175   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:49.644344   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:49.644573   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:49.644741   14846 main.go:141] libmachine: Using SSH client type: native
	I0626 19:36:49.645175   14846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0626 19:36:49.645193   14846 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 19:36:49.960792   14846 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 19:36:49.960821   14846 main.go:141] libmachine: Checking connection to Docker...
	I0626 19:36:49.960840   14846 main.go:141] libmachine: (addons-118062) Calling .GetURL
	I0626 19:36:49.962159   14846 main.go:141] libmachine: (addons-118062) DBG | Using libvirt version 6000000
	I0626 19:36:49.964284   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.964606   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:49.964635   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.964828   14846 main.go:141] libmachine: Docker is up and running!
	I0626 19:36:49.964854   14846 main.go:141] libmachine: Reticulating splines...
	I0626 19:36:49.964864   14846 client.go:171] LocalClient.Create took 26.476598159s
	I0626 19:36:49.964889   14846 start.go:167] duration metric: libmachine.API.Create for "addons-118062" took 26.476654547s
	I0626 19:36:49.964904   14846 start.go:300] post-start starting for "addons-118062" (driver="kvm2")
	I0626 19:36:49.964915   14846 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 19:36:49.964937   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:36:49.965155   14846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 19:36:49.965178   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:49.967768   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.968139   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:49.968171   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:49.968302   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:49.968585   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:49.968768   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:49.968945   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:36:50.063286   14846 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 19:36:50.067331   14846 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 19:36:50.067356   14846 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 19:36:50.067445   14846 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 19:36:50.067474   14846 start.go:303] post-start completed in 102.564175ms
	I0626 19:36:50.067504   14846 main.go:141] libmachine: (addons-118062) Calling .GetConfigRaw
	I0626 19:36:50.068042   14846 main.go:141] libmachine: (addons-118062) Calling .GetIP
	I0626 19:36:50.070494   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.070851   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:50.070893   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.071129   14846 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/config.json ...
	I0626 19:36:50.071335   14846 start.go:128] duration metric: createHost completed in 26.60138235s
	I0626 19:36:50.071361   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:50.073664   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.073953   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:50.073990   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.074082   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:50.074243   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:50.074397   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:50.074617   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:50.074780   14846 main.go:141] libmachine: Using SSH client type: native
	I0626 19:36:50.075159   14846 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0626 19:36:50.075171   14846 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 19:36:50.206101   14846 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687808210.191038653
	
	I0626 19:36:50.206122   14846 fix.go:206] guest clock: 1687808210.191038653
	I0626 19:36:50.206131   14846 fix.go:219] Guest: 2023-06-26 19:36:50.191038653 +0000 UTC Remote: 2023-06-26 19:36:50.071347094 +0000 UTC m=+26.703364994 (delta=119.691559ms)
	I0626 19:36:50.206173   14846 fix.go:190] guest clock delta is within tolerance: 119.691559ms
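
	The fix.go lines above compare the guest's `date +%s.%N` against the host wall clock and accept the skew when it stays within tolerance. A small sketch reproducing the delta from the values logged above; the 1s tolerance is an assumption for illustration, not necessarily minikube's constant:

	    package main

	    import (
	    	"fmt"
	    	"time"
	    )

	    func main() {
	    	// Parsed from the "date +%s.%N" output in the log above.
	    	guest := time.Unix(1687808210, 191038653)
	    	// The host-side timestamp recorded just before the SSH call.
	    	remote := time.Date(2023, 6, 26, 19, 36, 50, 71347094, time.UTC)
	    	delta := guest.Sub(remote)
	    	if delta < 0 {
	    		delta = -delta
	    	}
	    	const tolerance = time.Second // assumed tolerance for illustration
	    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
	    }
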
	I0626 19:36:50.206179   14846 start.go:83] releasing machines lock for "addons-118062", held for 26.736332799s
	I0626 19:36:50.206205   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:36:50.206576   14846 main.go:141] libmachine: (addons-118062) Calling .GetIP
	I0626 19:36:50.209190   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.209559   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:50.209583   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.209764   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:36:50.210325   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:36:50.210510   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:36:50.210613   14846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 19:36:50.210655   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:50.210751   14846 ssh_runner.go:195] Run: cat /version.json
	I0626 19:36:50.210775   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:36:50.212948   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.213292   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.213334   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:50.213385   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.213484   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:50.213658   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:50.213729   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:50.213757   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:50.213894   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:36:50.213915   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:50.214061   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:36:50.214069   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:36:50.214311   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:36:50.214499   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:36:50.328413   14846 ssh_runner.go:195] Run: systemctl --version
	I0626 19:36:50.334184   14846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 19:36:50.494463   14846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 19:36:50.501105   14846 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 19:36:50.501165   14846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 19:36:50.514932   14846 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 19:36:50.514953   14846 start.go:466] detecting cgroup driver to use...
	I0626 19:36:50.515002   14846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 19:36:50.528438   14846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 19:36:50.540068   14846 docker.go:196] disabling cri-docker service (if available) ...
	I0626 19:36:50.540117   14846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 19:36:50.552431   14846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 19:36:50.564858   14846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 19:36:50.671067   14846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 19:36:50.787146   14846 docker.go:212] disabling docker service ...
	I0626 19:36:50.787207   14846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 19:36:50.801506   14846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 19:36:50.813041   14846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 19:36:50.912067   14846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 19:36:51.012246   14846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 19:36:51.025617   14846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 19:36:51.042622   14846 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 19:36:51.042696   14846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:36:51.051855   14846 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 19:36:51.051931   14846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:36:51.061210   14846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:36:51.070484   14846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:36:51.079775   14846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 19:36:51.089014   14846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 19:36:51.096793   14846 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 19:36:51.096854   14846 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 19:36:51.109623   14846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
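
	The sysctl probe fails because br_netfilter is not loaded yet, so the flow above falls back to modprobe and then enables IPv4 forwarding. A sketch of the same probe-then-fallback pattern, using the commands from the log (requires root):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Probe: succeeds only once the bridge netfilter module is loaded.
	    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
	    		// Fallback: load the module, as in the log above.
	    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
	    			fmt.Println("modprobe br_netfilter failed:", err)
	    			return
	    		}
	    	}
	    	// Enable IPv4 forwarding either way.
	    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
	    		fmt.Println("enabling ip_forward failed:", err)
	    	}
	    }
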
	I0626 19:36:51.118894   14846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 19:36:51.220793   14846 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 19:36:51.384936   14846 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 19:36:51.385017   14846 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 19:36:51.389823   14846 start.go:534] Will wait 60s for crictl version
	I0626 19:36:51.389898   14846 ssh_runner.go:195] Run: which crictl
	I0626 19:36:51.394144   14846 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 19:36:51.428959   14846 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 19:36:51.429084   14846 ssh_runner.go:195] Run: crio --version
	I0626 19:36:51.476369   14846 ssh_runner.go:195] Run: crio --version
	I0626 19:36:51.525773   14846 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 19:36:51.527273   14846 main.go:141] libmachine: (addons-118062) Calling .GetIP
	I0626 19:36:51.529878   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:51.530149   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:36:51.530186   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:36:51.530401   14846 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 19:36:51.534796   14846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 19:36:51.549203   14846 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 19:36:51.549262   14846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 19:36:51.575693   14846 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 19:36:51.575760   14846 ssh_runner.go:195] Run: which lz4
	I0626 19:36:51.579700   14846 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 19:36:51.583930   14846 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 19:36:51.583955   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 19:36:53.302642   14846 crio.go:444] Took 1.722961 seconds to copy over tarball
	I0626 19:36:53.302709   14846 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 19:36:56.205102   14846 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.902365593s)
	I0626 19:36:56.205132   14846 crio.go:451] Took 2.902464 seconds to extract the tarball
	I0626 19:36:56.205143   14846 ssh_runner.go:146] rm: /preloaded.tar.lz4
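
	The preload step above is check-then-copy-then-extract: stat the tarball, ship it over SSH only when the stat fails, unpack with `tar -I lz4`, then delete it to free space. A trimmed sketch of that flow; the SSH copy is left as an assumed, unimplemented helper:

	    package main

	    import (
	    	"os"
	    	"os/exec"
	    )

	    func main() {
	    	const tarball = "/preloaded.tar.lz4" // path from the log
	    	if _, err := os.Stat(tarball); err != nil {
	    		// Not present yet: in the real run this is where the ~437 MB
	    		// preload tarball is copied over SSH (ssh_runner.go:362 scp).
	    		// copyOverSSH(tarball) // hypothetical helper, not shown
	    	}
	    	// Extract into /var, then remove the tarball.
	    	if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
	    		panic(err)
	    	}
	    	_ = os.Remove(tarball)
	    }
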
	I0626 19:36:56.246471   14846 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 19:36:56.296076   14846 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 19:36:56.296104   14846 cache_images.go:84] Images are preloaded, skipping loading
	I0626 19:36:56.296197   14846 ssh_runner.go:195] Run: crio config
	I0626 19:36:56.348489   14846 cni.go:84] Creating CNI manager for ""
	I0626 19:36:56.348512   14846 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 19:36:56.348522   14846 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 19:36:56.348537   14846 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-118062 NodeName:addons-118062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 19:36:56.348654   14846 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-118062"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 19:36:56.348714   14846 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-118062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-118062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 19:36:56.348772   14846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 19:36:56.358707   14846 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 19:36:56.358765   14846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 19:36:56.367808   14846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0626 19:36:56.383117   14846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 19:36:56.399348   14846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0626 19:36:56.414642   14846 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I0626 19:36:56.418271   14846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 19:36:56.429894   14846 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062 for IP: 192.168.39.92
	I0626 19:36:56.429934   14846 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:56.430060   14846 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 19:36:56.594809   14846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt ...
	I0626 19:36:56.594840   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt: {Name:mkd15a27913de4495dfa77682bd2d8ec18a9975a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:56.595020   14846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key ...
	I0626 19:36:56.595031   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key: {Name:mk9b5a79514ee6500f7839c0e5d74f66f849842d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:56.595109   14846 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 19:36:56.819837   14846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt ...
	I0626 19:36:56.819865   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt: {Name:mk829b164a16e793e010f9c32e4f29f4d283ce14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:56.820027   14846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key ...
	I0626 19:36:56.820037   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key: {Name:mke52aa9cfa3b5e93333853466728584d0669301 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:56.820138   14846 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.key
	I0626 19:36:56.820151   14846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt with IP's: []
	I0626 19:36:57.136163   14846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt ...
	I0626 19:36:57.136191   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: {Name:mkbd97a4a9f9155aa1625ecc79d946ba3d1c1728 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:57.136361   14846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.key ...
	I0626 19:36:57.136375   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.key: {Name:mk699895e794d4794d444c223861ae0342f30894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:57.136458   14846 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.key.08611cbb
	I0626 19:36:57.136479   14846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.crt.08611cbb with IP's: [192.168.39.92 10.96.0.1 127.0.0.1 10.0.0.1]
	I0626 19:36:57.257114   14846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.crt.08611cbb ...
	I0626 19:36:57.257146   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.crt.08611cbb: {Name:mkec243e5216dd65ab60d9725dd427068440b0fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:57.257312   14846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.key.08611cbb ...
	I0626 19:36:57.257330   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.key.08611cbb: {Name:mk3cf9e5df36a1326f3d992ab0d1ef0bd493f48c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:57.257452   14846 certs.go:337] copying /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.crt.08611cbb -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.crt
	I0626 19:36:57.257551   14846 certs.go:341] copying /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.key.08611cbb -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.key
	I0626 19:36:57.257641   14846 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/proxy-client.key
	I0626 19:36:57.257664   14846 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/proxy-client.crt with IP's: []
	I0626 19:36:57.499463   14846 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/proxy-client.crt ...
	I0626 19:36:57.499500   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/proxy-client.crt: {Name:mk5b7ad371ea0e615b1913329b300336f563e069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:57.499674   14846 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/proxy-client.key ...
	I0626 19:36:57.499688   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/proxy-client.key: {Name:mk5c2a77f1641db350379fa3ca1baebfcf2aecb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:36:57.499992   14846 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 19:36:57.500044   14846 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 19:36:57.500075   14846 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 19:36:57.500100   14846 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 19:36:57.500618   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 19:36:57.524041   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 19:36:57.546717   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 19:36:57.568717   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 19:36:57.589811   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 19:36:57.610882   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 19:36:57.631881   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 19:36:57.654076   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 19:36:57.675787   14846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 19:36:57.697164   14846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 19:36:57.712502   14846 ssh_runner.go:195] Run: openssl version
	I0626 19:36:57.717810   14846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 19:36:57.727757   14846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:36:57.732001   14846 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:36:57.732048   14846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:36:57.737258   14846 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
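
	The two commands above install the minikube CA into the OpenSSL trust store: link the PEM into /usr/share/ca-certificates, then create the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL uses for lookups (b5213941 is that hash in this run). A sketch that derives the hash and creates the link, assuming openssl on PATH and root privileges:

	    package main

	    import (
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	    	// Same subject-hash computation as "openssl x509 -hash -noout" in the log.
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	    	if err != nil {
	    		panic(err)
	    	}
	    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" above
	    	link := "/etc/ssl/certs/" + hash + ".0"
	    	_ = os.Remove(link) // replace any stale link, like ln -fs
	    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
	    		panic(err)
	    	}
	    }
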
	I0626 19:36:57.747408   14846 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 19:36:57.751291   14846 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 19:36:57.751331   14846 kubeadm.go:404] StartCluster: {Name:addons-118062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-118062 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:36:57.751400   14846 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 19:36:57.751446   14846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 19:36:57.780459   14846 cri.go:89] found id: ""
	I0626 19:36:57.780521   14846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 19:36:57.789791   14846 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 19:36:57.798890   14846 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 19:36:57.809992   14846 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 19:36:57.810028   14846 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 19:36:57.861811   14846 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 19:36:57.861864   14846 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 19:36:57.991295   14846 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 19:36:57.991427   14846 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 19:36:57.991536   14846 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 19:36:58.170112   14846 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 19:36:58.264158   14846 out.go:204]   - Generating certificates and keys ...
	I0626 19:36:58.264280   14846 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 19:36:58.264455   14846 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 19:36:58.364014   14846 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0626 19:36:58.417699   14846 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0626 19:36:58.575931   14846 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0626 19:36:58.718296   14846 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0626 19:36:58.827660   14846 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0626 19:36:58.827854   14846 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-118062 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0626 19:36:58.967704   14846 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0626 19:36:58.967899   14846 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-118062 localhost] and IPs [192.168.39.92 127.0.0.1 ::1]
	I0626 19:36:59.103922   14846 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0626 19:36:59.173287   14846 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0626 19:36:59.390983   14846 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0626 19:36:59.391090   14846 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 19:36:59.626899   14846 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 19:36:59.807756   14846 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 19:37:00.006579   14846 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 19:37:00.062005   14846 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 19:37:00.077472   14846 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 19:37:00.078625   14846 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 19:37:00.078712   14846 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 19:37:00.195259   14846 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 19:37:00.197337   14846 out.go:204]   - Booting up control plane ...
	I0626 19:37:00.197464   14846 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 19:37:00.200583   14846 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 19:37:00.201729   14846 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 19:37:00.205142   14846 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 19:37:00.205842   14846 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 19:37:08.707140   14846 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503082 seconds
	I0626 19:37:08.707271   14846 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 19:37:08.727093   14846 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 19:37:09.280005   14846 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 19:37:09.280256   14846 kubeadm.go:322] [mark-control-plane] Marking the node addons-118062 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 19:37:09.795791   14846 kubeadm.go:322] [bootstrap-token] Using token: i0aqbo.wua12ootocn57kx4
	I0626 19:37:09.797355   14846 out.go:204]   - Configuring RBAC rules ...
	I0626 19:37:09.797513   14846 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 19:37:09.804755   14846 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 19:37:09.817321   14846 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 19:37:09.821014   14846 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 19:37:09.825035   14846 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 19:37:09.832667   14846 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 19:37:09.850419   14846 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 19:37:10.096664   14846 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 19:37:10.228047   14846 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 19:37:10.229052   14846 kubeadm.go:322] 
	I0626 19:37:10.229128   14846 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 19:37:10.229138   14846 kubeadm.go:322] 
	I0626 19:37:10.229222   14846 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 19:37:10.229253   14846 kubeadm.go:322] 
	I0626 19:37:10.229297   14846 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 19:37:10.229400   14846 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 19:37:10.229485   14846 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 19:37:10.229496   14846 kubeadm.go:322] 
	I0626 19:37:10.229562   14846 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 19:37:10.229579   14846 kubeadm.go:322] 
	I0626 19:37:10.229645   14846 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 19:37:10.229657   14846 kubeadm.go:322] 
	I0626 19:37:10.229715   14846 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 19:37:10.229810   14846 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 19:37:10.229907   14846 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 19:37:10.229917   14846 kubeadm.go:322] 
	I0626 19:37:10.230016   14846 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 19:37:10.230114   14846 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 19:37:10.230127   14846 kubeadm.go:322] 
	I0626 19:37:10.230234   14846 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token i0aqbo.wua12ootocn57kx4 \
	I0626 19:37:10.230370   14846 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 19:37:10.230429   14846 kubeadm.go:322] 	--control-plane 
	I0626 19:37:10.230438   14846 kubeadm.go:322] 
	I0626 19:37:10.230548   14846 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 19:37:10.230559   14846 kubeadm.go:322] 
	I0626 19:37:10.230663   14846 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token i0aqbo.wua12ootocn57kx4 \
	I0626 19:37:10.230799   14846 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 19:37:10.231344   14846 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 19:37:10.231377   14846 cni.go:84] Creating CNI manager for ""
	I0626 19:37:10.231412   14846 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 19:37:10.234209   14846 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 19:37:10.235950   14846 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 19:37:10.246048   14846 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 19:37:10.292708   14846 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 19:37:10.292765   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:10.292790   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=addons-118062 minikube.k8s.io/updated_at=2023_06_26T19_37_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:10.350731   14846 ops.go:34] apiserver oom_adj: -16
	I0626 19:37:10.462462   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:11.125316   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:11.625460   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:12.125412   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:12.624765   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:13.125539   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:13.625668   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:14.124734   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:14.624940   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:15.125031   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:15.624844   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:16.124811   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:16.625721   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:17.124707   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:17.625417   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:18.125471   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:18.625234   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:19.125267   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:19.625337   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:20.125255   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:20.625591   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:21.125317   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:21.625294   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:22.124983   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:22.625393   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:23.125663   14846 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:37:23.240036   14846 kubeadm.go:1081] duration metric: took 12.947326909s to wait for elevateKubeSystemPrivileges.
	I0626 19:37:23.240072   14846 kubeadm.go:406] StartCluster complete in 25.48874388s
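The burst of identical `kubectl get sa default` invocations above is minikube polling until the `default` service account has been created by the controller-manager (pods cannot be admitted into a namespace before its default service account exists), which is the 12.9s "elevateKubeSystemPrivileges" wait reported in the summary line. A hand-rolled bash equivalent of that loop, with the poll interval inferred from the ~500ms spacing of the timestamps:

    # approximate equivalent of the wait loop in the log
    # (assumption: 500ms poll interval, inferred from the timestamps)
    until sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done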
	I0626 19:37:23.240090   14846 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:37:23.240217   14846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 19:37:23.240741   14846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:37:23.240985   14846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 19:37:23.241022   14846 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
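The map above lists every addon with its on/off state for this profile; the `Setting addon ... in "addons-118062"` lines that follow appear shuffled because each enabled addon is brought up concurrently, so their log lines interleave. The same per-profile state can be toggled and inspected from the CLI; for example (real minikube commands, profile name taken from this log):

    # enable and list addons for the profile seen in this log
    minikube -p addons-118062 addons enable ingress
    minikube -p addons-118062 addons list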
	I0626 19:37:23.241131   14846 addons.go:66] Setting volumesnapshots=true in profile "addons-118062"
	I0626 19:37:23.241144   14846 addons.go:66] Setting ingress-dns=true in profile "addons-118062"
	I0626 19:37:23.241157   14846 addons.go:228] Setting addon volumesnapshots=true in "addons-118062"
	I0626 19:37:23.241167   14846 addons.go:228] Setting addon ingress-dns=true in "addons-118062"
	I0626 19:37:23.241163   14846 addons.go:66] Setting cloud-spanner=true in profile "addons-118062"
	I0626 19:37:23.241199   14846 addons.go:228] Setting addon cloud-spanner=true in "addons-118062"
	I0626 19:37:23.241192   14846 addons.go:66] Setting metrics-server=true in profile "addons-118062"
	I0626 19:37:23.241216   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.241222   14846 addons.go:66] Setting storage-provisioner=true in profile "addons-118062"
	I0626 19:37:23.241232   14846 addons.go:228] Setting addon storage-provisioner=true in "addons-118062"
	I0626 19:37:23.241240   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.241243   14846 addons.go:228] Setting addon metrics-server=true in "addons-118062"
	I0626 19:37:23.241243   14846 config.go:182] Loaded profile config "addons-118062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 19:37:23.241257   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.241247   14846 addons.go:66] Setting gcp-auth=true in profile "addons-118062"
	I0626 19:37:23.241234   14846 addons.go:66] Setting inspektor-gadget=true in profile "addons-118062"
	I0626 19:37:23.241281   14846 mustload.go:65] Loading cluster: addons-118062
	I0626 19:37:23.241283   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.241293   14846 addons.go:228] Setting addon inspektor-gadget=true in "addons-118062"
	I0626 19:37:23.241295   14846 addons.go:66] Setting default-storageclass=true in profile "addons-118062"
	I0626 19:37:23.241308   14846 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-118062"
	I0626 19:37:23.241218   14846 addons.go:66] Setting registry=true in profile "addons-118062"
	I0626 19:37:23.241316   14846 addons.go:66] Setting helm-tiller=true in profile "addons-118062"
	I0626 19:37:23.241675   14846 addons.go:228] Setting addon helm-tiller=true in "addons-118062"
	I0626 19:37:23.241735   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.241770   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.241133   14846 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-118062"
	I0626 19:37:23.241325   14846 addons.go:228] Setting addon registry=true in "addons-118062"
	I0626 19:37:23.242013   14846 addons.go:66] Setting ingress=true in profile "addons-118062"
	I0626 19:37:23.242034   14846 addons.go:228] Setting addon ingress=true in "addons-118062"
	I0626 19:37:23.242085   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.242198   14846 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-118062"
	I0626 19:37:23.242307   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.241211   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.243417   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.243716   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.243734   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.243771   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.243779   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.243799   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.243845   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.243881   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.243904   14846 config.go:182] Loaded profile config "addons-118062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 19:37:23.243792   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.244013   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.244030   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.244037   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.244054   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.243738   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.244083   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.244258   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.244299   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.244584   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.244614   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.244622   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.244653   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.244662   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.244676   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.244796   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.244850   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.264130   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I0626 19:37:23.264137   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0626 19:37:23.264632   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.264734   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.265151   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.265173   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.265514   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.265952   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.265989   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.266184   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I0626 19:37:23.266268   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.266289   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.266667   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.267256   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.267295   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.267426   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.267889   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.267908   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.268243   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.268618   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45651
	I0626 19:37:23.268789   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.268832   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.269231   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.269719   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.269737   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.270539   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.271025   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.271060   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.271636   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I0626 19:37:23.272010   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.272445   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.272466   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.272764   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.273297   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.273343   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.283904   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I0626 19:37:23.284075   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0626 19:37:23.284351   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.284747   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.285303   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.285319   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.285738   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.286131   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.286149   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.286551   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.286607   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I0626 19:37:23.287204   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.287239   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.287374   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.287955   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.288006   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.288205   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.288217   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.288287   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0626 19:37:23.288540   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.288670   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0626 19:37:23.288826   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.288827   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.289041   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.289615   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.289630   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.290017   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.290027   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.290098   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.290292   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.290467   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.290595   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.295162   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0626 19:37:23.295195   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.295162   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.295584   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.295618   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.297717   14846 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.6
	I0626 19:37:23.295749   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.297067   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.297337   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0626 19:37:23.299284   14846 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0626 19:37:23.299298   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0626 19:37:23.299317   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.300825   14846 out.go:177]   - Using image docker.io/registry:2.8.1
	I0626 19:37:23.299804   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.300033   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.302225   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.303600   14846 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0626 19:37:23.302328   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.302772   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.302917   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.303097   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.305099   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.305123   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.305183   14846 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0626 19:37:23.305195   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0626 19:37:23.305211   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.305212   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.305537   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0626 19:37:23.305679   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.305743   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.306264   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.306308   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.306432   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.306494   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.306549   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.306624   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.307345   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.307378   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.307688   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.307826   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.308963   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I0626 19:37:23.309405   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.309452   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.309713   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.311477   14846 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0626 19:37:23.310014   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.310268   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.310316   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.310526   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.312614   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.312663   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.312787   14846 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0626 19:37:23.312806   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0626 19:37:23.312822   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.313510   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.313569   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.315018   14846 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.17.0
	I0626 19:37:23.313769   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.313920   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.315689   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.316216   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.316240   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.316241   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.316288   14846 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0626 19:37:23.316297   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0626 19:37:23.316311   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.316414   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.316428   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.316608   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.316756   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.318418   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.320458   14846 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0626 19:37:23.319440   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.320501   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.320529   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.320045   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.321942   14846 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0626 19:37:23.321963   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0626 19:37:23.321982   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.320712   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.322317   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.322481   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.323874   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0626 19:37:23.324258   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.325146   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.325164   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.325496   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.325658   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.325717   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0626 19:37:23.325870   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.326070   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.326253   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.326275   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.326476   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.326670   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.326823   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.326836   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.327040   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.327220   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.327716   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.328808   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.328833   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.335629   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44571
	I0626 19:37:23.336675   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.337306   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.337323   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.337779   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.338300   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.338318   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.340900   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
	I0626 19:37:23.341320   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.341871   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.341889   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.342300   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.342350   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I0626 19:37:23.342672   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.342724   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.343204   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.343222   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.343497   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0626 19:37:23.343625   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.343811   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.343883   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.344296   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.344312   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.344314   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0626 19:37:23.344612   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.344675   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.344839   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.345219   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.345237   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.345577   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.345721   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.345743   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.348145   14846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.0
	I0626 19:37:23.347474   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.347680   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.350239   14846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0626 19:37:23.351814   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37003
	I0626 19:37:23.352039   14846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0626 19:37:23.353329   14846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0626 19:37:23.351956   14846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0626 19:37:23.351985   14846 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0626 19:37:23.352400   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.356151   14846 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0626 19:37:23.356165   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0626 19:37:23.356176   14846 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0626 19:37:23.356179   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.356186   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0626 19:37:23.357490   14846 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0626 19:37:23.355196   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.356198   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.360116   14846 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0626 19:37:23.358875   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.359072   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.359526   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0626 19:37:23.359783   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.362418   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.361452   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.362448   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.361524   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.362426   14846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0626 19:37:23.361920   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.363839   14846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0626 19:37:23.363864   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.362104   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.362665   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.362680   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.362295   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.365265   14846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0626 19:37:23.366770   14846 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0626 19:37:23.365322   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.365560   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.365604   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.365609   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.366006   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.368079   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.368165   14846 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0626 19:37:23.368175   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0626 19:37:23.368191   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.368910   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.368904   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.369112   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.370356   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.372423   14846 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0626 19:37:23.373945   14846 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 19:37:23.373958   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 19:37:23.373969   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.372787   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.371922   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.374036   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.374061   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.373452   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.375819   14846 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 19:37:23.374263   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.376722   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.377100   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.377124   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.377163   14846 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 19:37:23.377177   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 19:37:23.377192   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.377279   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.377337   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.377499   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.377507   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.377659   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.377767   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.379689   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.380033   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.380064   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.380205   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.380353   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.380472   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.380605   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.487349   14846 addons.go:228] Setting addon default-storageclass=true in "addons-118062"
	I0626 19:37:23.487393   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:23.487673   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.487706   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.503607   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40101
	I0626 19:37:23.504037   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.504465   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.504489   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.504829   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.505250   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:23.505313   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:23.520239   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0626 19:37:23.520628   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:23.521069   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:23.521094   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:23.521350   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:23.521533   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:23.522941   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
	I0626 19:37:23.523157   14846 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 19:37:23.523168   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 19:37:23.523180   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:23.525473   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.525893   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:23.525925   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:23.526059   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:23.526202   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:23.526333   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:23.526488   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:23.583586   14846 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0626 19:37:23.583613   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0626 19:37:23.604115   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0626 19:37:23.619232   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0626 19:37:23.632042   14846 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0626 19:37:23.632071   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0626 19:37:23.639491   14846 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0626 19:37:23.639514   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0626 19:37:23.646741   14846 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0626 19:37:23.646760   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0626 19:37:23.701930   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0626 19:37:23.710867   14846 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 19:37:23.710897   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0626 19:37:23.717172   14846 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0626 19:37:23.717197   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0626 19:37:23.745476   14846 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
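The one-liner above pulls the live coredns ConfigMap, splices a `hosts` block (mapping host.minikube.internal to the host-side gateway 192.168.39.1) and a `log` directive into the Corefile with sed, and pipes the result back through `kubectl replace`. The outcome can be inspected afterwards with plain kubectl:

    # verify the injected hosts block in the live Corefile
    sudo /var/lib/minikube/binaries/v1.27.3/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -A3 'hosts'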
	I0626 19:37:23.799558   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 19:37:23.810726   14846 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0626 19:37:23.810752   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0626 19:37:23.814526   14846 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0626 19:37:23.814543   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0626 19:37:23.854191   14846 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0626 19:37:23.854211   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0626 19:37:23.857251   14846 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0626 19:37:23.857270   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0626 19:37:23.874260   14846 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0626 19:37:23.874283   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0626 19:37:23.879057   14846 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 19:37:23.879077   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 19:37:23.936719   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 19:37:23.938019   14846 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0626 19:37:23.938043   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0626 19:37:23.961735   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0626 19:37:24.125832   14846 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0626 19:37:24.125855   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0626 19:37:24.164847   14846 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-118062" context rescaled to 1 replicas
	I0626 19:37:24.164890   14846 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 19:37:24.167991   14846 out.go:177] * Verifying Kubernetes components...
	I0626 19:37:24.169495   14846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 19:37:24.180411   14846 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 19:37:24.180441   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 19:37:24.181382   14846 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0626 19:37:24.181402   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0626 19:37:24.203324   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0626 19:37:24.233299   14846 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0626 19:37:24.233326   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0626 19:37:24.237073   14846 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0626 19:37:24.237089   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0626 19:37:24.317726   14846 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0626 19:37:24.317751   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0626 19:37:24.347626   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 19:37:24.381252   14846 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0626 19:37:24.381274   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0626 19:37:24.381903   14846 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0626 19:37:24.381929   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0626 19:37:24.425367   14846 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0626 19:37:24.425404   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0626 19:37:24.462866   14846 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0626 19:37:24.462893   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0626 19:37:24.465105   14846 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0626 19:37:24.465124   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0626 19:37:24.522377   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0626 19:37:24.552848   14846 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0626 19:37:24.552874   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0626 19:37:24.568931   14846 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0626 19:37:24.568956   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0626 19:37:24.761260   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0626 19:37:24.761742   14846 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0626 19:37:24.761760   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0626 19:37:24.824218   14846 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0626 19:37:24.824239   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0626 19:37:24.915825   14846 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0626 19:37:24.915849   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0626 19:37:24.971264   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
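The eleven -f flags in the line above show the pattern this whole phase follows: each rendered addon manifest is first copied to /etc/kubernetes/addons over SSH (the "scp memory -->" lines), then the related group is applied in a single kubectl invocation so RBAC, driver, and storage objects land together. A minimal sketch of that batched apply, with an illustrative package name and paths that are not minikube's actual layout:

    // Sketch: batch-apply a group of staged manifests in one kubectl call, the
    // "sudo KUBECONFIG=... kubectl apply -f ... -f ..." shape in the lines above.
    package addons

    import (
    	"fmt"
    	"os/exec"
    )

    func applyBatch(kubectl, kubeconfig string, manifests []string) error {
    	// sudo accepts VAR=value assignments ahead of the command name,
    	// which is how KUBECONFIG reaches kubectl in the log lines above.
    	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    		return fmt.Errorf("apply failed: %v\n%s", err, out)
    	}
    	return nil
    }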
	I0626 19:37:30.519799   14846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0626 19:37:30.519851   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:30.522932   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:30.523346   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:30.523378   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:30.523486   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:30.523682   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:30.523827   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:30.523955   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
	I0626 19:37:30.995299   14846 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0626 19:37:31.030489   14846 addons.go:228] Setting addon gcp-auth=true in "addons-118062"
	I0626 19:37:31.030547   14846 host.go:66] Checking if "addons-118062" exists ...
	I0626 19:37:31.030977   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:31.031030   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:31.046818   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34703
	I0626 19:37:31.047300   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:31.047779   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:31.047802   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:31.048207   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:31.048937   14846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:37:31.048983   14846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:37:31.063663   14846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0626 19:37:31.064112   14846 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:37:31.064595   14846 main.go:141] libmachine: Using API Version  1
	I0626 19:37:31.064622   14846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:37:31.064926   14846 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:37:31.065099   14846 main.go:141] libmachine: (addons-118062) Calling .GetState
	I0626 19:37:31.066515   14846 main.go:141] libmachine: (addons-118062) Calling .DriverName
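The GetVersion/SetConfigRaw/GetState calls above go to the kvm2 driver binary running as a separate plugin process serving RPC on a loopback port (the "Plugin server listening at address 127.0.0.1:36137" line), which the main process dials per operation and later shuts down (the "close driver server" lines). A rough sketch of that round-trip using Go's net/rpc; the "Driver.GetState" service and method names here are hypothetical stand-ins, not libmachine's actual wire types:

    // Sketch of one RPC round-trip to a driver plugin server on loopback.
    package main

    import (
    	"log"
    	"net/rpc"
    )

    func main() {
    	client, err := rpc.Dial("tcp", "127.0.0.1:36137") // port taken from the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close() // analogous to the "close driver server" calls in the log

    	var state string
    	if err := client.Call("Driver.GetState", struct{}{}, &state); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("driver state: %s", state)
    }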
	I0626 19:37:31.066785   14846 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0626 19:37:31.066807   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHHostname
	I0626 19:37:31.068937   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:31.069315   14846 main.go:141] libmachine: (addons-118062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:fd:20", ip: ""} in network mk-addons-118062: {Iface:virbr1 ExpiryTime:2023-06-26 20:36:39 +0000 UTC Type:0 Mac:52:54:00:8e:fd:20 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:addons-118062 Clientid:01:52:54:00:8e:fd:20}
	I0626 19:37:31.069345   14846 main.go:141] libmachine: (addons-118062) DBG | domain addons-118062 has defined IP address 192.168.39.92 and MAC address 52:54:00:8e:fd:20 in network mk-addons-118062
	I0626 19:37:31.069576   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHPort
	I0626 19:37:31.069780   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHKeyPath
	I0626 19:37:31.069977   14846 main.go:141] libmachine: (addons-118062) Calling .GetSSHUsername
	I0626 19:37:31.070145   14846 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa Username:docker}
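sshutil.go:53 assembles the SSH client from the DHCP-discovered VM address, port 22, and the per-machine id_rsa key, then runs the credentials check that completes a couple of seconds later. A minimal equivalent with golang.org/x/crypto/ssh, reusing the address and key path from the log; host-key checking is skipped purely as a test-VM shortcut:

    // Sketch: dial the minikube VM over SSH with a private key and run one command.
    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/16761-7242/.minikube/machines/addons-118062/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM only
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.92:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	out, err := sess.CombinedOutput("cat /var/lib/minikube/google_application_credentials.json")
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("%s", out)
    }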
	I0626 19:37:32.194538   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.575277396s)
	I0626 19:37:32.194581   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.194594   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.194595   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.49263532s)
	I0626 19:37:32.194618   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.194631   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.194546   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.590399711s)
	I0626 19:37:32.194672   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.194687   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.194718   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.3951261s)
	I0626 19:37:32.194741   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.194754   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.194764   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.258017935s)
	I0626 19:37:32.194781   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.194790   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.194848   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.23307968s)
	I0626 19:37:32.194864   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.194870   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.194873   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.194880   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.194884   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.194893   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.194923   14846 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (8.025410881s)
	I0626 19:37:32.194980   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.195010   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.195037   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.991681828s)
	I0626 19:37:32.195044   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.195054   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.195056   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.195064   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.195067   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.195125   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.195133   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.195141   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.195151   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.195161   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.847507635s)
	I0626 19:37:32.195177   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.195186   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.195195   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.195321   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.672887124s)
	W0626 19:37:32.195345   14846 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0626 19:37:32.195367   14846 retry.go:31] will retry after 326.464613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
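The failure above is the usual CRD establishment race rather than a broken manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are applied in the same batch, and the API server has not yet registered the new kind when kubectl validates csi-hostpath-snapshotclass.yaml, hence "ensure CRDs are installed first". addons.go treats this as retryable, and the re-apply visible a few lines below (this time with apply --force) succeeds once discovery catches up. A minimal sketch of that retry shape, where applyManifests is a hypothetical stand-in for the kubectl batch and the timing values are illustrative:

    // Sketch: re-run a failing apply with a growing delay until the CRDs are established.
    package addons

    import (
    	"fmt"
    	"time"
    )

    func applyWithRetry(applyManifests func() error) error {
    	delay := 300 * time.Millisecond // the first backoff in the log is ~326ms
    	var err error
    	for attempt := 0; attempt < 5; attempt++ {
    		if err = applyManifests(); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    		delay *= 2
    	}
    	return fmt.Errorf("apply still failing after retries: %w", err)
    }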
	I0626 19:37:32.195451   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.434155288s)
	I0626 19:37:32.195467   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.195477   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.195687   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.195716   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.195727   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.195738   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.195746   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.195792   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.195812   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.195819   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.195918   14846 node_ready.go:35] waiting up to 6m0s for node "addons-118062" to be "Ready" ...
	I0626 19:37:32.196106   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.196117   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.196125   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.196135   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.197197   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.197229   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.197238   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.197247   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.197255   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.197311   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.197331   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.197340   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.198051   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.198078   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.198086   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.198146   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.198154   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.198164   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.198173   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.198221   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.198239   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.198247   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.198466   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.198494   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.198503   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.198511   14846 addons.go:464] Verifying addon metrics-server=true in "addons-118062"
	I0626 19:37:32.194660   14846 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.449150599s)
	I0626 19:37:32.199135   14846 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
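The long sed pipeline completing here edits the live coredns ConfigMap in place: it inserts a log directive before the errors line and a hosts block before the "forward . /etc/resolv.conf" line, then feeds the result back through kubectl replace -f -. Reconstructed from that sed expression, the injected Corefile block is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

which is exactly what start.go:901 then reports as the host record injected into CoreDNS's ConfigMap, giving cluster pods a stable name for the host gateway.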
	I0626 19:37:32.199350   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.199380   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.199405   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.199412   14846 addons.go:464] Verifying addon registry=true in "addons-118062"
	I0626 19:37:32.202342   14846 out.go:177] * Verifying registry addon...
	I0626 19:37:32.199658   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.199740   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.199759   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.199764   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.199776   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.204237   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.204247   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.204266   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.204283   14846 addons.go:464] Verifying addon ingress=true in "addons-118062"
	I0626 19:37:32.204269   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.204318   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.204253   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.204359   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.205984   14846 out.go:177] * Verifying ingress addon...
	I0626 19:37:32.204614   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.204639   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.204664   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.204668   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:32.205151   14846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0626 19:37:32.206068   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.206057   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.207466   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:32.207478   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:32.207723   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:32.207748   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:32.208183   14846 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0626 19:37:32.215581   14846 node_ready.go:49] node "addons-118062" has status "Ready":"True"
	I0626 19:37:32.215605   14846 node_ready.go:38] duration metric: took 19.673115ms waiting for node "addons-118062" to be "Ready" ...
	I0626 19:37:32.215615   14846 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 19:37:32.235202   14846 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0626 19:37:32.235229   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:32.235281   14846 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0626 19:37:32.235292   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:32.251309   14846 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace to be "Ready" ...
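From here the log settles into kapi.go's polling loops: for each addon it lists pods by label selector and re-checks until the phase leaves Pending (the repeated "current state: Pending" iterations below), in parallel with pod_ready.go's wait on the coredns pod. A compact sketch of that wait using client-go, assuming the kubeconfig path and reusing the registry label selector and kube-system namespace from the log; the poll interval is illustrative:

    // Sketch: poll pods matching a label selector until all are Running,
    // the same shape as the kapi.go:96 lines that follow.
    package main

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	selector := "kubernetes.io/minikube-addons=registry"
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			log.Fatal(err)
    		}
    		ready := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				ready = false
    			}
    		}
    		if ready {
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }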
	I0626 19:37:32.522801   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0626 19:37:32.750272   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:32.754154   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:33.066204   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.094888572s)
	I0626 19:37:33.066271   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:33.066228   14846 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.999421076s)
	I0626 19:37:33.068204   14846 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0626 19:37:33.066283   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:33.071399   14846 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0626 19:37:33.069933   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:33.069962   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:33.072877   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:33.072894   14846 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0626 19:37:33.072904   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:33.072905   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0626 19:37:33.072918   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:33.073174   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:33.074186   14846 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0626 19:37:33.074204   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:33.074214   14846 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-118062"
	I0626 19:37:33.075932   14846 out.go:177] * Verifying csi-hostpath-driver addon...
	I0626 19:37:33.078432   14846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0626 19:37:33.184693   14846 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0626 19:37:33.184720   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0626 19:37:33.250839   14846 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0626 19:37:33.250860   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:33.269507   14846 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0626 19:37:33.269533   14846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0626 19:37:33.310172   14846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0626 19:37:33.320604   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:33.360617   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:33.742631   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:33.751036   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:33.760383   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:34.326661   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:34.326784   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:34.384193   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:34.437676   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:34.740680   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:34.745042   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:34.787676   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:35.298103   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:35.309998   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:35.310295   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:35.784835   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:35.802217   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:35.802450   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:35.962619   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.439777139s)
	I0626 19:37:35.962672   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:35.962685   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:35.962940   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:35.962966   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:35.962975   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:35.962995   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:35.963251   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:35.963264   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:35.963281   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:36.289469   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:36.289565   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:36.295058   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:36.316002   14846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.005792834s)
	I0626 19:37:36.316046   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:36.316057   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:36.316401   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:36.316421   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:36.316432   14846 main.go:141] libmachine: Making call to close driver server
	I0626 19:37:36.316429   14846 main.go:141] libmachine: (addons-118062) DBG | Closing plugin on server side
	I0626 19:37:36.316443   14846 main.go:141] libmachine: (addons-118062) Calling .Close
	I0626 19:37:36.316671   14846 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:37:36.316694   14846 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:37:36.318306   14846 addons.go:464] Verifying addon gcp-auth=true in "addons-118062"
	I0626 19:37:36.319944   14846 out.go:177] * Verifying gcp-auth addon...
	I0626 19:37:36.322289   14846 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0626 19:37:36.353358   14846 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0626 19:37:36.353395   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:36.745432   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:36.746180   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:36.759402   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:36.824833   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:36.857010   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:37.245706   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:37.246367   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:37.255719   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:37.359068   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:37.741586   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:37.744063   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:37.756824   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:37.859138   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:38.241836   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:38.242811   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:38.264475   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:38.359832   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:38.741159   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:38.746024   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:38.759123   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:38.857612   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:39.240668   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:39.244221   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:39.258939   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:39.312052   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:39.359121   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:39.741772   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:39.742253   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:39.763651   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:39.870475   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:40.249527   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:40.249528   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:40.267368   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:40.356844   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:40.744676   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:40.755021   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:40.762492   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:40.857770   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:41.241098   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:41.242691   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:41.261322   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:41.313435   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:41.360727   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:41.746403   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:41.747681   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:41.756446   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:41.861005   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:42.242756   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:42.243142   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:42.259622   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:42.357193   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:42.740875   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:42.741049   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:42.759624   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:42.859364   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:43.243684   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:43.245154   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:43.256981   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:43.325639   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:43.382135   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:43.741303   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:43.742357   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:43.757587   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:43.859118   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:44.252218   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:44.252482   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:44.302980   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:44.358674   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:44.743403   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:44.743664   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:44.757164   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:44.860385   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:45.242814   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:45.268393   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:45.268667   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:45.358859   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:45.742367   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:45.742377   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:45.755761   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:45.812590   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:45.857021   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:46.242737   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:46.242908   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:46.256540   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:46.358259   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:46.741735   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:46.742743   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:46.756388   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:46.857654   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:47.241014   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:47.241082   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:47.257034   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:47.358902   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:47.741208   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:47.741863   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:47.757777   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:47.816129   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:47.857171   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:48.240022   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:48.240478   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:48.256750   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:48.357852   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:48.742574   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:48.743245   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:48.757943   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:48.857669   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:49.240942   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:49.240969   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:49.258146   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:49.358626   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:49.740638   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:49.742247   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:49.756959   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:49.857849   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:50.240775   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:50.241562   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:50.256599   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:50.314095   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:50.358215   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:50.742282   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:50.746843   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:50.756942   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:50.858462   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:51.244508   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:51.248525   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:51.258061   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:51.360801   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:51.742391   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:51.742549   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:51.756783   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:51.859031   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:52.243075   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:52.243157   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:52.257456   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:52.358905   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:52.745572   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:52.748471   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:52.761302   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:52.814704   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:52.857758   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:53.242559   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:53.242836   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:53.256582   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:53.357525   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:53.740934   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:53.741855   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:53.756746   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:53.857502   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:54.241976   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:54.243430   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:54.258865   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:54.358028   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:54.741304   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:54.742558   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:54.756788   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:54.858429   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:55.244631   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:55.247738   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:55.258161   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:55.312901   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:55.357343   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:55.741818   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:55.742526   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:55.757958   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:55.858988   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:56.244604   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:56.245268   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:56.256856   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:56.358070   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:56.741568   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:56.743582   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:56.758234   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:56.858445   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:57.240977   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:57.242247   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:57.258774   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:57.313445   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:57.358473   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:57.742006   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:57.743853   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:57.756168   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:57.858095   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:58.242567   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:58.243045   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:58.257270   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:58.359185   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:58.945473   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:58.946007   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:58.946093   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:58.950145   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:59.246387   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:59.246807   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:59.257060   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:59.315169   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:37:59.359961   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:37:59.743139   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:37:59.743614   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:37:59.760929   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:37:59.868916   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:00.241738   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:00.243237   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:00.257805   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:00.358057   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:00.741582   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:00.742275   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:00.757791   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:00.859403   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:01.242567   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:01.242898   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:01.256150   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:01.357368   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:01.741641   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:01.743664   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:01.762802   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:01.814313   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:38:01.857759   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:02.242649   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:02.245679   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:02.263357   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:02.359413   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:02.740066   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:02.741716   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:02.756925   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:02.858324   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:03.240344   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:03.242516   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:03.256215   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:03.358205   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:03.842128   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:03.843028   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:03.843165   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:03.845352   14846 pod_ready.go:102] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"False"
	I0626 19:38:03.857496   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:04.240622   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:04.241748   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:04.258094   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:04.357328   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:04.741309   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:04.741603   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:04.756154   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:05.101093   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:05.241695   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:05.241850   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:05.257153   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:05.315148   14846 pod_ready.go:92] pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace has status "Ready":"True"
	I0626 19:38:05.315172   14846 pod_ready.go:81] duration metric: took 33.06383623s waiting for pod "coredns-5d78c9869d-9dfbm" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.315182   14846 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-j4lh2" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.318108   14846 pod_ready.go:97] error getting pod "coredns-5d78c9869d-j4lh2" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-j4lh2" not found
	I0626 19:38:05.318129   14846 pod_ready.go:81] duration metric: took 2.941226ms waiting for pod "coredns-5d78c9869d-j4lh2" in "kube-system" namespace to be "Ready" ...
	E0626 19:38:05.318137   14846 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-j4lh2" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-j4lh2" not found
	I0626 19:38:05.318143   14846 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-118062" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.324334   14846 pod_ready.go:92] pod "etcd-addons-118062" in "kube-system" namespace has status "Ready":"True"
	I0626 19:38:05.324351   14846 pod_ready.go:81] duration metric: took 6.203195ms waiting for pod "etcd-addons-118062" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.324359   14846 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-118062" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.329362   14846 pod_ready.go:92] pod "kube-apiserver-addons-118062" in "kube-system" namespace has status "Ready":"True"
	I0626 19:38:05.329394   14846 pod_ready.go:81] duration metric: took 5.028633ms waiting for pod "kube-apiserver-addons-118062" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.329403   14846 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-118062" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.341270   14846 pod_ready.go:92] pod "kube-controller-manager-addons-118062" in "kube-system" namespace has status "Ready":"True"
	I0626 19:38:05.341295   14846 pod_ready.go:81] duration metric: took 11.886289ms waiting for pod "kube-controller-manager-addons-118062" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.341307   14846 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w9vvt" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.363586   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:05.509573   14846 pod_ready.go:92] pod "kube-proxy-w9vvt" in "kube-system" namespace has status "Ready":"True"
	I0626 19:38:05.509595   14846 pod_ready.go:81] duration metric: took 168.281764ms waiting for pod "kube-proxy-w9vvt" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.509605   14846 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-118062" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.743957   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:05.744102   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:05.759673   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:05.858080   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:05.911424   14846 pod_ready.go:92] pod "kube-scheduler-addons-118062" in "kube-system" namespace has status "Ready":"True"
	I0626 19:38:05.911450   14846 pod_ready.go:81] duration metric: took 401.83889ms waiting for pod "kube-scheduler-addons-118062" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:05.911463   14846 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-944s6" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:06.242445   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:06.245119   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:06.256204   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:06.357875   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:06.741772   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:06.741780   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:06.756557   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:06.857148   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:07.243028   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:07.243059   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:07.256686   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:07.357482   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:07.743572   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:07.743697   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:07.756682   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:07.859129   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:08.865779   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:08.866958   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:08.866961   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:08.881666   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:08.882298   14846 pod_ready.go:102] pod "metrics-server-844d8db974-944s6" in "kube-system" namespace has status "Ready":"False"
	I0626 19:38:08.886041   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:08.899794   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:08.900188   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:08.900283   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:09.247582   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:09.249611   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:09.260964   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:09.358367   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:09.742363   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:09.742639   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:09.757523   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:09.857358   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:10.242944   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:10.243175   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:10.258278   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:10.359433   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:10.739809   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:10.742294   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:10.756191   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:10.858070   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:11.243173   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:11.243410   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:11.257682   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:11.318590   14846 pod_ready.go:102] pod "metrics-server-844d8db974-944s6" in "kube-system" namespace has status "Ready":"False"
	I0626 19:38:11.358003   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:11.742913   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:11.743135   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:11.756727   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:11.858836   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:12.245339   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:12.245952   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:12.263374   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:12.357153   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:12.746375   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:12.746553   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:12.761491   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:12.864074   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:13.241833   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:13.242936   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:13.260057   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:13.324624   14846 pod_ready.go:102] pod "metrics-server-844d8db974-944s6" in "kube-system" namespace has status "Ready":"False"
	I0626 19:38:13.357151   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:13.743527   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:13.744369   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:13.756573   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:13.889317   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:14.246585   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:14.250394   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:14.264475   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:14.328718   14846 pod_ready.go:92] pod "metrics-server-844d8db974-944s6" in "kube-system" namespace has status "Ready":"True"
	I0626 19:38:14.328749   14846 pod_ready.go:81] duration metric: took 8.417277614s waiting for pod "metrics-server-844d8db974-944s6" in "kube-system" namespace to be "Ready" ...
	I0626 19:38:14.328775   14846 pod_ready.go:38] duration metric: took 42.113146692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
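The readiness loop recorded above polls each system-critical pod's PodReady condition roughly every 500ms and, as with the already-deleted coredns-5d78c9869d-j4lh2 replica earlier, treats a NotFound error as "skip" rather than a failure. A minimal client-go sketch of that pattern (hypothetical waitPodReady helper, assuming a kubeconfig at the default path; not minikube's actual pod_ready.go):

    // A sketch of the readiness poll: fetch the pod, return once the
    // PodReady condition is True, and skip pods that no longer exist.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                fmt.Printf("pod %q not found, skipping\n", name) // mirrors the "(skipping!)" lines above
                return nil
            }
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("pod %q not Ready after %s", name, timeout)
            }
            time.Sleep(500 * time.Millisecond) // the log above ticks at roughly this interval
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-5d78c9869d-9dfbm", 6*time.Minute); err != nil {
            panic(err)
        }
    }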
	I0626 19:38:14.328796   14846 api_server.go:52] waiting for apiserver process to appear ...
	I0626 19:38:14.328846   14846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 19:38:14.388795   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:14.391902   14846 api_server.go:72] duration metric: took 50.226966796s to wait for apiserver process to appear ...
	I0626 19:38:14.391923   14846 api_server.go:88] waiting for apiserver healthz status ...
	I0626 19:38:14.391941   14846 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0626 19:38:14.404332   14846 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0626 19:38:14.406000   14846 api_server.go:141] control plane version: v1.27.3
	I0626 19:38:14.406026   14846 api_server.go:131] duration metric: took 14.095257ms to wait for apiserver health ...
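The healthz probe above is a plain GET against the apiserver's /healthz endpoint; a 200 response with the literal body "ok" counts as healthy. A minimal sketch through the authenticated REST client (illustrative only, not minikube's api_server.go; assumes a kubeconfig at the default path):

    // A sketch of the apiserver health check: GET /healthz with the
    // cluster credentials and print the body (the log above shows "ok").
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err) // non-2xx responses surface as errors here
        }
        fmt.Printf("/healthz returned: %s\n", body)
    }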
	I0626 19:38:14.406035   14846 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 19:38:14.433705   14846 system_pods.go:59] 17 kube-system pods found
	I0626 19:38:14.433732   14846 system_pods.go:61] "coredns-5d78c9869d-9dfbm" [b14e2e5d-eda6-438a-94df-20024dc391e7] Running
	I0626 19:38:14.433737   14846 system_pods.go:61] "csi-hostpath-attacher-0" [fc7f4613-39b6-4c42-89b8-33bfc7685209] Running
	I0626 19:38:14.433743   14846 system_pods.go:61] "csi-hostpath-resizer-0" [60deef35-5bd0-497a-a8b5-e297085ddcbb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0626 19:38:14.433751   14846 system_pods.go:61] "csi-hostpathplugin-zv6ln" [a271cda6-54ab-4469-93b2-edfae1c59a49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0626 19:38:14.433759   14846 system_pods.go:61] "etcd-addons-118062" [4a80b6ee-a77b-435f-8d18-bfcd4b343703] Running
	I0626 19:38:14.433763   14846 system_pods.go:61] "kube-apiserver-addons-118062" [50f05820-e1b5-473a-a67d-9deb82547f13] Running
	I0626 19:38:14.433768   14846 system_pods.go:61] "kube-controller-manager-addons-118062" [83058550-80a8-45e7-a427-4b02a7c6b4a2] Running
	I0626 19:38:14.433773   14846 system_pods.go:61] "kube-ingress-dns-minikube" [29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c] Running
	I0626 19:38:14.433777   14846 system_pods.go:61] "kube-proxy-w9vvt" [cc946b0c-a7b4-46c1-938e-ed86d2139ad9] Running
	I0626 19:38:14.433780   14846 system_pods.go:61] "kube-scheduler-addons-118062" [503b9959-b314-42df-abd9-8fd0bdc65397] Running
	I0626 19:38:14.433789   14846 system_pods.go:61] "metrics-server-844d8db974-944s6" [02b88f9d-a6aa-4824-b872-389e6fe198a8] Running
	I0626 19:38:14.433796   14846 system_pods.go:61] "registry-proxy-5bdlk" [22b2f5ee-9096-47f8-87ec-b8917bab1abe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0626 19:38:14.433802   14846 system_pods.go:61] "registry-zjfg6" [b91c3b06-35cc-451a-bbef-ba61f98d3f3f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0626 19:38:14.433809   14846 system_pods.go:61] "snapshot-controller-75bbb956b9-cq7rk" [870905dc-f3aa-44f3-ae92-db2ac692c081] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0626 19:38:14.433816   14846 system_pods.go:61] "snapshot-controller-75bbb956b9-k5k29" [10d7be63-0e4b-4436-ace8-c42f7ecfe6f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0626 19:38:14.433829   14846 system_pods.go:61] "storage-provisioner" [845c2d86-c88a-4b6f-8691-6fe83adda0a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 19:38:14.433835   14846 system_pods.go:61] "tiller-deploy-6847666dc-bzkls" [740b2eee-2d63-4a0c-a3ac-aa6fb6ff775c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0626 19:38:14.433841   14846 system_pods.go:74] duration metric: took 27.801135ms to wait for pod list to return data ...
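The 17-pod inventory above comes from listing the kube-system namespace and reporting each pod's phase plus any containers that are not yet Ready, which is what the "ContainersNotReady (containers with unready status: [...])" annotations record. A minimal sketch of the same listing (assumes a kubeconfig at the default path; not minikube's system_pods.go):

    // A sketch of the kube-system inventory: list pods and name the
    // containers still unready in each one.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            var unready []string
            for _, st := range p.Status.ContainerStatuses {
                if !st.Ready {
                    unready = append(unready, st.Name)
                }
            }
            if p.Status.Phase == corev1.PodRunning && len(unready) == 0 {
                fmt.Printf("%q Running\n", p.Name)
            } else {
                fmt.Printf("%q %s, unready containers: %v\n", p.Name, p.Status.Phase, unready)
            }
        }
    }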
	I0626 19:38:14.433851   14846 default_sa.go:34] waiting for default service account to be created ...
	I0626 19:38:14.436930   14846 default_sa.go:45] found service account: "default"
	I0626 19:38:14.436953   14846 default_sa.go:55] duration metric: took 3.096763ms for default service account to be created ...
	I0626 19:38:14.436962   14846 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 19:38:14.447097   14846 system_pods.go:86] 17 kube-system pods found
	I0626 19:38:14.447127   14846 system_pods.go:89] "coredns-5d78c9869d-9dfbm" [b14e2e5d-eda6-438a-94df-20024dc391e7] Running
	I0626 19:38:14.447136   14846 system_pods.go:89] "csi-hostpath-attacher-0" [fc7f4613-39b6-4c42-89b8-33bfc7685209] Running
	I0626 19:38:14.447146   14846 system_pods.go:89] "csi-hostpath-resizer-0" [60deef35-5bd0-497a-a8b5-e297085ddcbb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0626 19:38:14.447158   14846 system_pods.go:89] "csi-hostpathplugin-zv6ln" [a271cda6-54ab-4469-93b2-edfae1c59a49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0626 19:38:14.447165   14846 system_pods.go:89] "etcd-addons-118062" [4a80b6ee-a77b-435f-8d18-bfcd4b343703] Running
	I0626 19:38:14.447173   14846 system_pods.go:89] "kube-apiserver-addons-118062" [50f05820-e1b5-473a-a67d-9deb82547f13] Running
	I0626 19:38:14.447181   14846 system_pods.go:89] "kube-controller-manager-addons-118062" [83058550-80a8-45e7-a427-4b02a7c6b4a2] Running
	I0626 19:38:14.447192   14846 system_pods.go:89] "kube-ingress-dns-minikube" [29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c] Running
	I0626 19:38:14.447200   14846 system_pods.go:89] "kube-proxy-w9vvt" [cc946b0c-a7b4-46c1-938e-ed86d2139ad9] Running
	I0626 19:38:14.447208   14846 system_pods.go:89] "kube-scheduler-addons-118062" [503b9959-b314-42df-abd9-8fd0bdc65397] Running
	I0626 19:38:14.447215   14846 system_pods.go:89] "metrics-server-844d8db974-944s6" [02b88f9d-a6aa-4824-b872-389e6fe198a8] Running
	I0626 19:38:14.447226   14846 system_pods.go:89] "registry-proxy-5bdlk" [22b2f5ee-9096-47f8-87ec-b8917bab1abe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0626 19:38:14.447239   14846 system_pods.go:89] "registry-zjfg6" [b91c3b06-35cc-451a-bbef-ba61f98d3f3f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0626 19:38:14.447250   14846 system_pods.go:89] "snapshot-controller-75bbb956b9-cq7rk" [870905dc-f3aa-44f3-ae92-db2ac692c081] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0626 19:38:14.447264   14846 system_pods.go:89] "snapshot-controller-75bbb956b9-k5k29" [10d7be63-0e4b-4436-ace8-c42f7ecfe6f5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0626 19:38:14.447275   14846 system_pods.go:89] "storage-provisioner" [845c2d86-c88a-4b6f-8691-6fe83adda0a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 19:38:14.447288   14846 system_pods.go:89] "tiller-deploy-6847666dc-bzkls" [740b2eee-2d63-4a0c-a3ac-aa6fb6ff775c] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0626 19:38:14.447298   14846 system_pods.go:126] duration metric: took 10.329637ms to wait for k8s-apps to be running ...
	I0626 19:38:14.447310   14846 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 19:38:14.447359   14846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 19:38:14.504353   14846 system_svc.go:56] duration metric: took 57.036365ms WaitForService to wait for kubelet.
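The WaitForService step above asks systemd whether kubelet is running: with --quiet, `systemctl is-active` prints nothing and answers through its exit code alone (0 only while the unit is active). minikube issues the command with sudo over SSH inside the VM via ssh_runner.go; a minimal local sketch of the same check:

    // A sketch of the kubelet liveness check: run systemctl and read the
    // exit code rather than parsing output.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err != nil {
            fmt.Println("kubelet is not active:", err) // any non-zero exit lands here
            return
        }
        fmt.Println("kubelet is active")
    }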
	I0626 19:38:14.504380   14846 kubeadm.go:581] duration metric: took 50.339448755s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 19:38:14.504411   14846 node_conditions.go:102] verifying NodePressure condition ...
	I0626 19:38:14.511412   14846 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 19:38:14.511439   14846 node_conditions.go:123] node cpu capacity is 2
	I0626 19:38:14.511449   14846 node_conditions.go:105] duration metric: took 7.033785ms to run NodePressure ...
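The NodePressure verification above reads each node's capacity (17784752Ki of ephemeral storage and 2 CPUs here) and confirms the kubelet is not reporting memory or disk pressure. A minimal client-go sketch of that check (illustrative, not minikube's node_conditions.go; assumes a kubeconfig at the default path):

    // A sketch of the node-pressure check: print capacity and flag any
    // MemoryPressure/DiskPressure condition that is not False.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
                n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
                    c.Status != corev1.ConditionFalse {
                    fmt.Printf("  pressure: %s is %s\n", c.Type, c.Status)
                }
            }
        }
    }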
	I0626 19:38:14.511459   14846 start.go:228] waiting for startup goroutines ...
	I0626 19:38:14.742120   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:14.745991   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:14.756177   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:14.858869   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:15.240970   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:15.243128   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:15.256671   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:15.357849   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:15.742256   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:15.742375   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:15.757442   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:15.863383   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:16.240233   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:16.241644   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:16.257677   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:16.361056   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:16.742859   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:16.743491   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:16.757193   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:16.858274   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:17.241576   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:17.241782   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:17.257748   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:17.362562   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:17.740717   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:17.741297   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:17.757360   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:17.857344   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:18.241175   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:18.241725   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:18.257506   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:18.358929   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:18.740596   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:18.743307   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:18.756207   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:18.857142   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:19.241405   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:19.242923   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:19.257288   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:19.358015   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:19.741213   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:19.741507   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:19.756625   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:19.857888   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:20.247998   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:20.248369   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:20.257223   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:20.359940   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:20.742625   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:20.742796   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:20.756799   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:20.859071   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:21.243123   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:21.243414   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:21.257140   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:21.357341   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:21.748539   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:21.748617   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:21.756172   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:21.856966   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:22.242456   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:22.242465   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:22.255988   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:22.357901   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:22.742319   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:22.742623   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:22.757576   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:22.859320   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:23.246988   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:23.247277   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:23.260681   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:23.357654   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:23.741486   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:23.741921   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:23.757935   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:23.858366   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:24.248147   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:24.248331   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:24.257564   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:24.357726   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:24.740832   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:24.743616   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:24.756181   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:24.857833   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:25.240983   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:25.249990   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:25.259862   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:25.358239   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:25.742975   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:25.744476   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:25.766137   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:25.856792   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:26.240498   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:26.241258   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:26.256667   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:26.357730   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:26.751468   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:26.751643   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:26.762071   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:26.858665   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:27.248400   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:27.248613   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:27.256554   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:27.357778   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:27.741617   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:27.741617   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:27.757973   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:27.857971   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:28.242619   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:28.242857   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:28.263499   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:28.359172   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:28.743049   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:28.743321   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:28.758430   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:28.863750   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:29.240609   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:29.241785   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:29.256887   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:29.359274   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:29.740926   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:29.742421   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:29.757949   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:29.857876   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:30.242598   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:30.243416   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:30.266391   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:30.358324   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:30.740113   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:30.741120   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:30.757785   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:30.861115   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:31.240873   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:31.242182   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:31.257160   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:31.357732   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:31.741069   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:31.742691   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:31.757548   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:31.857871   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:32.249413   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:32.253594   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:32.261049   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:32.358693   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:32.742304   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:32.742755   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:32.757044   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:32.858394   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:33.240425   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:33.240796   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:33.258680   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:33.357370   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:33.742770   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:33.743602   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:33.757319   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:33.858211   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:34.240506   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:34.240653   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:34.256989   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:34.357646   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:34.745154   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:34.745861   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:34.767621   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:34.870582   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:35.240074   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:35.243935   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:35.263263   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:35.364602   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:35.740318   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:35.740484   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:35.756990   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:35.857740   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:36.242690   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:36.242696   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:36.257299   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:36.358903   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:36.751187   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:36.751477   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:36.758657   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:36.861263   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:37.240207   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:37.241210   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:37.257940   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:37.359762   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:37.741254   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:37.741333   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:37.756918   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:37.859759   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:38.244224   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:38.259435   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:38.330607   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:38.364860   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:38.740422   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:38.743176   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:38.763979   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:38.857951   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:39.247816   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 19:38:39.247993   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:39.256338   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:39.360104   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:39.741213   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:39.746093   14846 kapi.go:107] duration metric: took 1m7.54093974s to wait for kubernetes.io/minikube-addons=registry ...
	I0626 19:38:39.758362   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:39.857918   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:40.243544   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:40.256924   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:40.358456   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:40.741022   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:40.760747   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:40.858082   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:41.241546   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:41.257009   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:41.358279   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:41.740349   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:41.758103   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:41.859341   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:42.239808   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:42.263436   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:42.365626   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:42.740838   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:42.757421   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:42.862045   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:43.365473   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:43.365585   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:43.394547   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:43.741145   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:43.759752   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:43.858275   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:44.240055   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:44.256817   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:44.359441   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:44.742314   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:44.762231   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:44.856811   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:45.240540   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:45.255734   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:45.358173   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:45.742538   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:45.758793   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:45.858269   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:46.240793   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:46.258554   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:46.357705   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:46.740761   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:46.757992   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:46.858166   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:47.240757   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:47.256697   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:47.357904   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:47.740659   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:47.758038   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:47.857854   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:48.241332   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:48.272948   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:48.358646   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:48.760135   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:48.764354   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:48.861454   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:49.240417   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:49.258242   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:49.357858   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:49.741160   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:49.756124   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:49.867017   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:50.241013   14846 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 19:38:50.271277   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:50.360482   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:50.741750   14846 kapi.go:107] duration metric: took 1m18.533566162s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0626 19:38:50.755886   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:50.859728   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:51.260695   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:51.363314   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:51.762661   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:51.857817   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:52.258475   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:52.357436   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:52.757706   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:52.858390   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:53.258253   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:53.361902   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:53.757818   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:53.857283   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:54.257196   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:54.358179   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:54.758912   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:54.858068   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:55.256658   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:55.357789   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 19:38:55.762761   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:55.857974   14846 kapi.go:107] duration metric: took 1m19.535680611s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0626 19:38:55.860090   14846 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-118062 cluster.
	I0626 19:38:55.861737   14846 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0626 19:38:55.863321   14846 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
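The `gcp-auth-skip-secret` opt-out described in the message above is an ordinary pod label; per that message, only the key is checked. A minimal sketch using client-go types (the pod name, image, and label value here are illustrative, not taken from this run):

    package example

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithoutGCPCreds declares a pod carrying the gcp-auth-skip-secret
    // label so the gcp-auth webhook leaves its credentials unmounted.
    func podWithoutGCPCreds() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "no-gcp-creds", // hypothetical name
    			Labels: map[string]string{
    				// Only the key matters per the log message above; "true" is illustrative.
    				"gcp-auth-skip-secret": "true",
    			},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "app",
    				Image: "gcr.io/google-samples/hello-app:1.0",
    			}},
    		},
    	}
    }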
	I0626 19:38:56.257717   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:56.769681   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:57.258145   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:57.872602   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:58.257225   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:58.757691   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:59.257230   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:38:59.757830   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:39:00.259222   14846 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 19:39:00.757824   14846 kapi.go:107] duration metric: took 1m27.679389366s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0626 19:39:00.760031   14846 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, inspektor-gadget, storage-provisioner, metrics-server, helm-tiller, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0626 19:39:00.761621   14846 addons.go:499] enable addons completed in 1m37.520601305s: enabled=[cloud-spanner ingress-dns inspektor-gadget storage-provisioner metrics-server helm-tiller default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0626 19:39:00.761662   14846 start.go:233] waiting for cluster config update ...
	I0626 19:39:00.761681   14846 start.go:242] writing updated cluster config ...
	I0626 19:39:00.761940   14846 ssh_runner.go:195] Run: rm -f paused
	I0626 19:39:00.813619   14846 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 19:39:00.815879   14846 out.go:177] * Done! kubectl is now configured to use "addons-118062" cluster and "default" namespace by default
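For context, the kapi.go:96 "waiting for pod" lines above, and the kapi.go:107 "duration metric" lines that close each of them, reflect a poll-until-Running loop over a label selector. Below is a minimal sketch of that pattern with client-go; the function name, interval handling, and output format are assumptions for illustration, not minikube's actual implementation:

    package example

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPods polls until at least one pod matches selector and every
    // match is Running, then reports the elapsed time, mirroring the
    // repeated "waiting for pod ... Pending" lines in the log above.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval time.Duration) error {
    	start := time.Now()
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			running := true
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					running = false // still Pending; keep polling
    					break
    				}
    			}
    			if running {
    				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // overall timeout for the addon wait
    		case <-time.After(interval):
    		}
    	}
    }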
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 19:36:36 UTC, ends at Mon 2023-06-26 19:41:51 UTC. --
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.262258864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a5c756e7393cfd6de7950dc5b88c7efa7b8da2787d418c4adc2ff65d21b6bea,PodSandboxId:01093b31b66f7f4b4a9be5fde47e73f2531b2d968bb36b54035940aa48b2ea04,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687808504760430105,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-vk64k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bca1389-ea78-4dac-b549-e19fd9eb37e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6a78e31,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42dec9acf33c5a8247d30a4bd28dfd37e27ea573fbc7a54fc9aaa3220ed39f,PodSandboxId:d4e752917e9893e890c6efaf7ce15d9c1893ee5838ce1f46a676aa8207efedfc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687808362547385647,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db3ca141-9ce1-4c7a-bc14-61d87f501c0b,},Annotations:map[string]string{io.kubernet
es.container.hash: e92b5b55,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539f5e83ffad88c4d974d1e1b105580f82512b934aa709fa1f0271b1a8c6fc5c,PodSandboxId:4a10ec27b80d05d48db78a0f44d80159ed7932046aed2ad3e6b4b4b859ef1c07,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1687808349621779671,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-22z64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1771f198-1e08-4649-bcbd-a15cc6d44d8d,},Annotations:map[string]string{io.kubernetes.container.hash: e92c7f0f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8edc14e5dda2ecafc8f607d4ff42808d5e02442c1f79ef8a2852abe67da4568,PodSandboxId:afe328c77e98631b17971106e2745f22afe5f8da23d037c106b30cd03ea31933,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1687808334685237970,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-zdvq4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ab9c4e0d-19c8-4032-a961-f3ebe80eca5e,},Annotations:map[string]string{io.kubernetes.container.hash: 7278ba82,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ec2229152f1cc591564d2bbf1a12511b451ec0e6cfe87e9a512b088ba82ca4,PodSandboxId:1c18a574ea3877a4bbd73fe5105a508e323740badbbddae508bfd63c904b9b99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808
324360597640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-722bw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8c2ec8-bdea-4fa0-9579-6aaa541ffbff,},Annotations:map[string]string{io.kubernetes.container.hash: 784feca4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5ffe139ee033e0c9ac3d70980a01bf4575cbdd4dd5eb1c3f69e0066962557c,PodSandboxId:7cdd3c97413c23dca7b585fce4fbd1cd230100c14d20dc1164ad440837be5e24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808314954472048,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4slff,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd62a62e-8d69-4a78-8a9d-8d0bec28d653,},Annotations:map[string]string{io.kubernetes.container.hash: a3c74d4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9736cd104706ce508c566b9fb13c77e6b3d34815392d23fdd55d16fe5b32364c,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c4
41c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687808293886651582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a69307f0369235246ea00e1925febefabe0b89a9e25e2e215e896a45b1e2c3a,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687808262490165520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:792ad5629bc46e4dede087b00f659da8b89f9f1da8492b1c1a685e82edabbacb,PodSandboxId:14621323101b4f55983a0d21d05e7b17f3a63a155bd1c4bdd19bc0c0d456af06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,Sta
te:CONTAINER_RUNNING,CreatedAt:1687808260291614687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9vvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc946b0c-a7b4-46c1-938e-ed86d2139ad9,},Annotations:map[string]string{io.kubernetes.container.hash: f454e97c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd3dd7a2b922382a872b00d817936633712abd8b7dc3bee2f223c9359c556903,PodSandboxId:f1f550ccb7b7b90ef94d5e50357d7fca2067999d3bb832585caefe8908881361,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16
87808249277879854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9dfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14e2e5d-eda6-438a-94df-20024dc391e7,},Annotations:map[string]string{io.kubernetes.container.hash: 975be9de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbac1ed2f4fa7fde361617847e82108e65d3b49783e71c3ef1380c1194bca67,PodSandboxId:72ad201fabef91487faac6f2a70dcda632ab833ba2205606996071dce564a3fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d96
68ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687808222415767384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1428c15b5af0c103000a07ddb693ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7e5154f2e20ccc7e7907c58af36e897e5a216e8ec3da7d98bd2e6e2267e0e7,PodSandboxId:03c369fd0ab1477892caba2913d7d6eb4832c67a9bcaa35ebd4bfec11f9177ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Ann
otations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687808222217916603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7400425696090f42b637a7b9ae15c8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5060fb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3157c99719fa6201075f5c4d113bc6f6f69ab82b4b758d5135bca5042d222587,PodSandboxId:df4da0c3c12c8d093b20f8327b2c7aa3428498f448601d1bdc08efff8fe1c0ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageR
ef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687808221923771338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6393c2003e1d0f51c9fdca8ed6ec73,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d53bd003cb3dc9dce7433453c3b22d63b641c61e07853822df43102f9cb2e30,PodSandboxId:6409143effac092bd9edbc66ffe77af50749977b8cae1548e86e0cb67318b926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]strin
g{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687808221906214277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104caea1e4fdb2f087634a5cce6e66b,},Annotations:map[string]string{io.kubernetes.container.hash: a97b0016,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9c2e75d5-a897-45f2-bde9-8bdf54000476 name=/runtime.v1alpha2.RuntimeService/ListContainers
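The debug entries above and below are CRI-O's side of CRI RuntimeService/ListContainers calls: the client sends an empty ContainerFilter, so CRI-O logs "No filters were applied" and returns every container. A client-side sketch of the same call, assuming CRI-O's default socket path and the v1alpha2 stubs shipped by older k8s.io/cri-api releases:

    package example

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    // listAllContainers issues the same empty-filter ListContainers request
    // that appears in the CRI-O debug log.
    func listAllContainers() error {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	// Default CRI-O socket; adjust if the runtime listens elsewhere.
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
    		grpc.WithInsecure(), grpc.WithBlock())
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
    		&runtimeapi.ListContainersRequest{
    			Filter: &runtimeapi.ContainerFilter{}, // empty filter -> full list
    		})
    	if err != nil {
    		return err
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%.12s  %-20s  %s\n", c.Id, c.Metadata.Name, c.State)
    	}
    	return nil
    }

(From the command line, crictl ps -a performs the equivalent query.)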
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.296902024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3fee9398-4c1d-4100-b14c-053acb511bd2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.296967744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3fee9398-4c1d-4100-b14c-053acb511bd2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	[debug Response for id=3fee9398-4c1d-4100-b14c-053acb511bd2 omitted: ListContainersResponse byte-for-byte identical to the 19:41:51.262258864Z response above]
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.335051263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3febf071-e8f6-40a2-8b3f-0aeb616eea6b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.335167623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3febf071-e8f6-40a2-8b3f-0aeb616eea6b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.335514284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a5c756e7393cfd6de7950dc5b88c7efa7b8da2787d418c4adc2ff65d21b6bea,PodSandboxId:01093b31b66f7f4b4a9be5fde47e73f2531b2d968bb36b54035940aa48b2ea04,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687808504760430105,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-vk64k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bca1389-ea78-4dac-b549-e19fd9eb37e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6a78e31,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42dec9acf33c5a8247d30a4bd28dfd37e27ea573fbc7a54fc9aaa3220ed39f,PodSandboxId:d4e752917e9893e890c6efaf7ce15d9c1893ee5838ce1f46a676aa8207efedfc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687808362547385647,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db3ca141-9ce1-4c7a-bc14-61d87f501c0b,},Annotations:map[string]string{io.kubernet
es.container.hash: e92b5b55,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539f5e83ffad88c4d974d1e1b105580f82512b934aa709fa1f0271b1a8c6fc5c,PodSandboxId:4a10ec27b80d05d48db78a0f44d80159ed7932046aed2ad3e6b4b4b859ef1c07,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1687808349621779671,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-22z64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1771f198-1e08-4649-bcbd-a15cc6d44d8d,},Annotations:map[string]string{io.kubernetes.container.hash: e92c7f0f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8edc14e5dda2ecafc8f607d4ff42808d5e02442c1f79ef8a2852abe67da4568,PodSandboxId:afe328c77e98631b17971106e2745f22afe5f8da23d037c106b30cd03ea31933,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1687808334685237970,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-zdvq4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ab9c4e0d-19c8-4032-a961-f3ebe80eca5e,},Annotations:map[string]string{io.kubernetes.container.hash: 7278ba82,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ec2229152f1cc591564d2bbf1a12511b451ec0e6cfe87e9a512b088ba82ca4,PodSandboxId:1c18a574ea3877a4bbd73fe5105a508e323740badbbddae508bfd63c904b9b99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808
324360597640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-722bw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8c2ec8-bdea-4fa0-9579-6aaa541ffbff,},Annotations:map[string]string{io.kubernetes.container.hash: 784feca4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5ffe139ee033e0c9ac3d70980a01bf4575cbdd4dd5eb1c3f69e0066962557c,PodSandboxId:7cdd3c97413c23dca7b585fce4fbd1cd230100c14d20dc1164ad440837be5e24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808314954472048,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4slff,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd62a62e-8d69-4a78-8a9d-8d0bec28d653,},Annotations:map[string]string{io.kubernetes.container.hash: a3c74d4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9736cd104706ce508c566b9fb13c77e6b3d34815392d23fdd55d16fe5b32364c,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c4
41c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687808293886651582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a69307f0369235246ea00e1925febefabe0b89a9e25e2e215e896a45b1e2c3a,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687808262490165520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:792ad5629bc46e4dede087b00f659da8b89f9f1da8492b1c1a685e82edabbacb,PodSandboxId:14621323101b4f55983a0d21d05e7b17f3a63a155bd1c4bdd19bc0c0d456af06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,Sta
te:CONTAINER_RUNNING,CreatedAt:1687808260291614687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9vvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc946b0c-a7b4-46c1-938e-ed86d2139ad9,},Annotations:map[string]string{io.kubernetes.container.hash: f454e97c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd3dd7a2b922382a872b00d817936633712abd8b7dc3bee2f223c9359c556903,PodSandboxId:f1f550ccb7b7b90ef94d5e50357d7fca2067999d3bb832585caefe8908881361,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16
87808249277879854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9dfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14e2e5d-eda6-438a-94df-20024dc391e7,},Annotations:map[string]string{io.kubernetes.container.hash: 975be9de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbac1ed2f4fa7fde361617847e82108e65d3b49783e71c3ef1380c1194bca67,PodSandboxId:72ad201fabef91487faac6f2a70dcda632ab833ba2205606996071dce564a3fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d96
68ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687808222415767384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1428c15b5af0c103000a07ddb693ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7e5154f2e20ccc7e7907c58af36e897e5a216e8ec3da7d98bd2e6e2267e0e7,PodSandboxId:03c369fd0ab1477892caba2913d7d6eb4832c67a9bcaa35ebd4bfec11f9177ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Ann
otations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687808222217916603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7400425696090f42b637a7b9ae15c8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5060fb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3157c99719fa6201075f5c4d113bc6f6f69ab82b4b758d5135bca5042d222587,PodSandboxId:df4da0c3c12c8d093b20f8327b2c7aa3428498f448601d1bdc08efff8fe1c0ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageR
ef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687808221923771338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6393c2003e1d0f51c9fdca8ed6ec73,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d53bd003cb3dc9dce7433453c3b22d63b641c61e07853822df43102f9cb2e30,PodSandboxId:6409143effac092bd9edbc66ffe77af50749977b8cae1548e86e0cb67318b926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]strin
g{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687808221906214277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104caea1e4fdb2f087634a5cce6e66b,},Annotations:map[string]string{io.kubernetes.container.hash: a97b0016,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3febf071-e8f6-40a2-8b3f-0aeb616eea6b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.383101914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a92cda8b-1b38-4d07-b17c-2fb699bfa769 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.383219705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a92cda8b-1b38-4d07-b17c-2fb699bfa769 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.383607154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a5c756e7393cfd6de7950dc5b88c7efa7b8da2787d418c4adc2ff65d21b6bea,PodSandboxId:01093b31b66f7f4b4a9be5fde47e73f2531b2d968bb36b54035940aa48b2ea04,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687808504760430105,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-vk64k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bca1389-ea78-4dac-b549-e19fd9eb37e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6a78e31,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42dec9acf33c5a8247d30a4bd28dfd37e27ea573fbc7a54fc9aaa3220ed39f,PodSandboxId:d4e752917e9893e890c6efaf7ce15d9c1893ee5838ce1f46a676aa8207efedfc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687808362547385647,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db3ca141-9ce1-4c7a-bc14-61d87f501c0b,},Annotations:map[string]string{io.kubernet
es.container.hash: e92b5b55,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539f5e83ffad88c4d974d1e1b105580f82512b934aa709fa1f0271b1a8c6fc5c,PodSandboxId:4a10ec27b80d05d48db78a0f44d80159ed7932046aed2ad3e6b4b4b859ef1c07,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1687808349621779671,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-22z64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1771f198-1e08-4649-bcbd-a15cc6d44d8d,},Annotations:map[string]string{io.kubernetes.container.hash: e92c7f0f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8edc14e5dda2ecafc8f607d4ff42808d5e02442c1f79ef8a2852abe67da4568,PodSandboxId:afe328c77e98631b17971106e2745f22afe5f8da23d037c106b30cd03ea31933,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1687808334685237970,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-zdvq4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ab9c4e0d-19c8-4032-a961-f3ebe80eca5e,},Annotations:map[string]string{io.kubernetes.container.hash: 7278ba82,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ec2229152f1cc591564d2bbf1a12511b451ec0e6cfe87e9a512b088ba82ca4,PodSandboxId:1c18a574ea3877a4bbd73fe5105a508e323740badbbddae508bfd63c904b9b99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808
324360597640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-722bw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8c2ec8-bdea-4fa0-9579-6aaa541ffbff,},Annotations:map[string]string{io.kubernetes.container.hash: 784feca4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5ffe139ee033e0c9ac3d70980a01bf4575cbdd4dd5eb1c3f69e0066962557c,PodSandboxId:7cdd3c97413c23dca7b585fce4fbd1cd230100c14d20dc1164ad440837be5e24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808314954472048,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4slff,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd62a62e-8d69-4a78-8a9d-8d0bec28d653,},Annotations:map[string]string{io.kubernetes.container.hash: a3c74d4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9736cd104706ce508c566b9fb13c77e6b3d34815392d23fdd55d16fe5b32364c,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c4
41c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687808293886651582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a69307f0369235246ea00e1925febefabe0b89a9e25e2e215e896a45b1e2c3a,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687808262490165520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:792ad5629bc46e4dede087b00f659da8b89f9f1da8492b1c1a685e82edabbacb,PodSandboxId:14621323101b4f55983a0d21d05e7b17f3a63a155bd1c4bdd19bc0c0d456af06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,Sta
te:CONTAINER_RUNNING,CreatedAt:1687808260291614687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9vvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc946b0c-a7b4-46c1-938e-ed86d2139ad9,},Annotations:map[string]string{io.kubernetes.container.hash: f454e97c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd3dd7a2b922382a872b00d817936633712abd8b7dc3bee2f223c9359c556903,PodSandboxId:f1f550ccb7b7b90ef94d5e50357d7fca2067999d3bb832585caefe8908881361,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16
87808249277879854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9dfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14e2e5d-eda6-438a-94df-20024dc391e7,},Annotations:map[string]string{io.kubernetes.container.hash: 975be9de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbac1ed2f4fa7fde361617847e82108e65d3b49783e71c3ef1380c1194bca67,PodSandboxId:72ad201fabef91487faac6f2a70dcda632ab833ba2205606996071dce564a3fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d96
68ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687808222415767384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1428c15b5af0c103000a07ddb693ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7e5154f2e20ccc7e7907c58af36e897e5a216e8ec3da7d98bd2e6e2267e0e7,PodSandboxId:03c369fd0ab1477892caba2913d7d6eb4832c67a9bcaa35ebd4bfec11f9177ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Ann
otations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687808222217916603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7400425696090f42b637a7b9ae15c8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5060fb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3157c99719fa6201075f5c4d113bc6f6f69ab82b4b758d5135bca5042d222587,PodSandboxId:df4da0c3c12c8d093b20f8327b2c7aa3428498f448601d1bdc08efff8fe1c0ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageR
ef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687808221923771338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6393c2003e1d0f51c9fdca8ed6ec73,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d53bd003cb3dc9dce7433453c3b22d63b641c61e07853822df43102f9cb2e30,PodSandboxId:6409143effac092bd9edbc66ffe77af50749977b8cae1548e86e0cb67318b926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]strin
g{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687808221906214277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104caea1e4fdb2f087634a5cce6e66b,},Annotations:map[string]string{io.kubernetes.container.hash: a97b0016,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a92cda8b-1b38-4d07-b17c-2fb699bfa769 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.428918374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=345269bc-1e6c-4d3b-90b7-3dea0794a693 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.429015270Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=345269bc-1e6c-4d3b-90b7-3dea0794a693 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.429363177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a5c756e7393cfd6de7950dc5b88c7efa7b8da2787d418c4adc2ff65d21b6bea,PodSandboxId:01093b31b66f7f4b4a9be5fde47e73f2531b2d968bb36b54035940aa48b2ea04,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687808504760430105,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-vk64k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bca1389-ea78-4dac-b549-e19fd9eb37e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6a78e31,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42dec9acf33c5a8247d30a4bd28dfd37e27ea573fbc7a54fc9aaa3220ed39f,PodSandboxId:d4e752917e9893e890c6efaf7ce15d9c1893ee5838ce1f46a676aa8207efedfc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687808362547385647,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db3ca141-9ce1-4c7a-bc14-61d87f501c0b,},Annotations:map[string]string{io.kubernet
es.container.hash: e92b5b55,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539f5e83ffad88c4d974d1e1b105580f82512b934aa709fa1f0271b1a8c6fc5c,PodSandboxId:4a10ec27b80d05d48db78a0f44d80159ed7932046aed2ad3e6b4b4b859ef1c07,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1687808349621779671,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-22z64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1771f198-1e08-4649-bcbd-a15cc6d44d8d,},Annotations:map[string]string{io.kubernetes.container.hash: e92c7f0f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8edc14e5dda2ecafc8f607d4ff42808d5e02442c1f79ef8a2852abe67da4568,PodSandboxId:afe328c77e98631b17971106e2745f22afe5f8da23d037c106b30cd03ea31933,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1687808334685237970,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-zdvq4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ab9c4e0d-19c8-4032-a961-f3ebe80eca5e,},Annotations:map[string]string{io.kubernetes.container.hash: 7278ba82,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ec2229152f1cc591564d2bbf1a12511b451ec0e6cfe87e9a512b088ba82ca4,PodSandboxId:1c18a574ea3877a4bbd73fe5105a508e323740badbbddae508bfd63c904b9b99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808
324360597640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-722bw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8c2ec8-bdea-4fa0-9579-6aaa541ffbff,},Annotations:map[string]string{io.kubernetes.container.hash: 784feca4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5ffe139ee033e0c9ac3d70980a01bf4575cbdd4dd5eb1c3f69e0066962557c,PodSandboxId:7cdd3c97413c23dca7b585fce4fbd1cd230100c14d20dc1164ad440837be5e24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808314954472048,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4slff,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd62a62e-8d69-4a78-8a9d-8d0bec28d653,},Annotations:map[string]string{io.kubernetes.container.hash: a3c74d4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9736cd104706ce508c566b9fb13c77e6b3d34815392d23fdd55d16fe5b32364c,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c4
41c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687808293886651582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a69307f0369235246ea00e1925febefabe0b89a9e25e2e215e896a45b1e2c3a,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687808262490165520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:792ad5629bc46e4dede087b00f659da8b89f9f1da8492b1c1a685e82edabbacb,PodSandboxId:14621323101b4f55983a0d21d05e7b17f3a63a155bd1c4bdd19bc0c0d456af06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,Sta
te:CONTAINER_RUNNING,CreatedAt:1687808260291614687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9vvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc946b0c-a7b4-46c1-938e-ed86d2139ad9,},Annotations:map[string]string{io.kubernetes.container.hash: f454e97c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd3dd7a2b922382a872b00d817936633712abd8b7dc3bee2f223c9359c556903,PodSandboxId:f1f550ccb7b7b90ef94d5e50357d7fca2067999d3bb832585caefe8908881361,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16
87808249277879854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9dfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14e2e5d-eda6-438a-94df-20024dc391e7,},Annotations:map[string]string{io.kubernetes.container.hash: 975be9de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbac1ed2f4fa7fde361617847e82108e65d3b49783e71c3ef1380c1194bca67,PodSandboxId:72ad201fabef91487faac6f2a70dcda632ab833ba2205606996071dce564a3fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d96
68ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687808222415767384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1428c15b5af0c103000a07ddb693ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7e5154f2e20ccc7e7907c58af36e897e5a216e8ec3da7d98bd2e6e2267e0e7,PodSandboxId:03c369fd0ab1477892caba2913d7d6eb4832c67a9bcaa35ebd4bfec11f9177ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Ann
otations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687808222217916603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7400425696090f42b637a7b9ae15c8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5060fb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3157c99719fa6201075f5c4d113bc6f6f69ab82b4b758d5135bca5042d222587,PodSandboxId:df4da0c3c12c8d093b20f8327b2c7aa3428498f448601d1bdc08efff8fe1c0ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageR
ef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687808221923771338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6393c2003e1d0f51c9fdca8ed6ec73,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d53bd003cb3dc9dce7433453c3b22d63b641c61e07853822df43102f9cb2e30,PodSandboxId:6409143effac092bd9edbc66ffe77af50749977b8cae1548e86e0cb67318b926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]strin
g{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687808221906214277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104caea1e4fdb2f087634a5cce6e66b,},Annotations:map[string]string{io.kubernetes.container.hash: a97b0016,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=345269bc-1e6c-4d3b-90b7-3dea0794a693 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.465991816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0cfe3c51-18cc-4761-b014-5a1b6ef82a62 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.466055943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0cfe3c51-18cc-4761-b014-5a1b6ef82a62 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.466407650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a5c756e7393cfd6de7950dc5b88c7efa7b8da2787d418c4adc2ff65d21b6bea,PodSandboxId:01093b31b66f7f4b4a9be5fde47e73f2531b2d968bb36b54035940aa48b2ea04,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687808504760430105,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-vk64k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bca1389-ea78-4dac-b549-e19fd9eb37e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6a78e31,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42dec9acf33c5a8247d30a4bd28dfd37e27ea573fbc7a54fc9aaa3220ed39f,PodSandboxId:d4e752917e9893e890c6efaf7ce15d9c1893ee5838ce1f46a676aa8207efedfc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687808362547385647,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db3ca141-9ce1-4c7a-bc14-61d87f501c0b,},Annotations:map[string]string{io.kubernet
es.container.hash: e92b5b55,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539f5e83ffad88c4d974d1e1b105580f82512b934aa709fa1f0271b1a8c6fc5c,PodSandboxId:4a10ec27b80d05d48db78a0f44d80159ed7932046aed2ad3e6b4b4b859ef1c07,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1687808349621779671,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-22z64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1771f198-1e08-4649-bcbd-a15cc6d44d8d,},Annotations:map[string]string{io.kubernetes.container.hash: e92c7f0f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8edc14e5dda2ecafc8f607d4ff42808d5e02442c1f79ef8a2852abe67da4568,PodSandboxId:afe328c77e98631b17971106e2745f22afe5f8da23d037c106b30cd03ea31933,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1687808334685237970,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-zdvq4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ab9c4e0d-19c8-4032-a961-f3ebe80eca5e,},Annotations:map[string]string{io.kubernetes.container.hash: 7278ba82,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ec2229152f1cc591564d2bbf1a12511b451ec0e6cfe87e9a512b088ba82ca4,PodSandboxId:1c18a574ea3877a4bbd73fe5105a508e323740badbbddae508bfd63c904b9b99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808
324360597640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-722bw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8c2ec8-bdea-4fa0-9579-6aaa541ffbff,},Annotations:map[string]string{io.kubernetes.container.hash: 784feca4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5ffe139ee033e0c9ac3d70980a01bf4575cbdd4dd5eb1c3f69e0066962557c,PodSandboxId:7cdd3c97413c23dca7b585fce4fbd1cd230100c14d20dc1164ad440837be5e24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808314954472048,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4slff,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd62a62e-8d69-4a78-8a9d-8d0bec28d653,},Annotations:map[string]string{io.kubernetes.container.hash: a3c74d4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9736cd104706ce508c566b9fb13c77e6b3d34815392d23fdd55d16fe5b32364c,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c4
41c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687808293886651582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a69307f0369235246ea00e1925febefabe0b89a9e25e2e215e896a45b1e2c3a,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687808262490165520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:792ad5629bc46e4dede087b00f659da8b89f9f1da8492b1c1a685e82edabbacb,PodSandboxId:14621323101b4f55983a0d21d05e7b17f3a63a155bd1c4bdd19bc0c0d456af06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,Sta
te:CONTAINER_RUNNING,CreatedAt:1687808260291614687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9vvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc946b0c-a7b4-46c1-938e-ed86d2139ad9,},Annotations:map[string]string{io.kubernetes.container.hash: f454e97c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd3dd7a2b922382a872b00d817936633712abd8b7dc3bee2f223c9359c556903,PodSandboxId:f1f550ccb7b7b90ef94d5e50357d7fca2067999d3bb832585caefe8908881361,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16
87808249277879854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9dfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14e2e5d-eda6-438a-94df-20024dc391e7,},Annotations:map[string]string{io.kubernetes.container.hash: 975be9de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbac1ed2f4fa7fde361617847e82108e65d3b49783e71c3ef1380c1194bca67,PodSandboxId:72ad201fabef91487faac6f2a70dcda632ab833ba2205606996071dce564a3fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d96
68ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687808222415767384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1428c15b5af0c103000a07ddb693ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7e5154f2e20ccc7e7907c58af36e897e5a216e8ec3da7d98bd2e6e2267e0e7,PodSandboxId:03c369fd0ab1477892caba2913d7d6eb4832c67a9bcaa35ebd4bfec11f9177ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Ann
otations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687808222217916603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7400425696090f42b637a7b9ae15c8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5060fb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3157c99719fa6201075f5c4d113bc6f6f69ab82b4b758d5135bca5042d222587,PodSandboxId:df4da0c3c12c8d093b20f8327b2c7aa3428498f448601d1bdc08efff8fe1c0ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageR
ef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687808221923771338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6393c2003e1d0f51c9fdca8ed6ec73,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d53bd003cb3dc9dce7433453c3b22d63b641c61e07853822df43102f9cb2e30,PodSandboxId:6409143effac092bd9edbc66ffe77af50749977b8cae1548e86e0cb67318b926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]strin
g{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687808221906214277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104caea1e4fdb2f087634a5cce6e66b,},Annotations:map[string]string{io.kubernetes.container.hash: a97b0016,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0cfe3c51-18cc-4761-b014-5a1b6ef82a62 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.502662023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=17bb15fa-ba03-4638-b888-c8ae2ddad4cc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.502729886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=17bb15fa-ba03-4638-b888-c8ae2ddad4cc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:41:51 addons-118062 crio[718]: time="2023-06-26 19:41:51.503146053Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a5c756e7393cfd6de7950dc5b88c7efa7b8da2787d418c4adc2ff65d21b6bea,PodSandboxId:01093b31b66f7f4b4a9be5fde47e73f2531b2d968bb36b54035940aa48b2ea04,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687808504760430105,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-vk64k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4bca1389-ea78-4dac-b549-e19fd9eb37e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6a78e31,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a42dec9acf33c5a8247d30a4bd28dfd37e27ea573fbc7a54fc9aaa3220ed39f,PodSandboxId:d4e752917e9893e890c6efaf7ce15d9c1893ee5838ce1f46a676aa8207efedfc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687808362547385647,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db3ca141-9ce1-4c7a-bc14-61d87f501c0b,},Annotations:map[string]string{io.kubernet
es.container.hash: e92b5b55,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:539f5e83ffad88c4d974d1e1b105580f82512b934aa709fa1f0271b1a8c6fc5c,PodSandboxId:4a10ec27b80d05d48db78a0f44d80159ed7932046aed2ad3e6b4b4b859ef1c07,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1687808349621779671,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-22z64,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1771f198-1e08-4649-bcbd-a15cc6d44d8d,},Annotations:map[string]string{io.kubernetes.container.hash: e92c7f0f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8edc14e5dda2ecafc8f607d4ff42808d5e02442c1f79ef8a2852abe67da4568,PodSandboxId:afe328c77e98631b17971106e2745f22afe5f8da23d037c106b30cd03ea31933,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1687808334685237970,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-zdvq4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: ab9c4e0d-19c8-4032-a961-f3ebe80eca5e,},Annotations:map[string]string{io.kubernetes.container.hash: 7278ba82,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ec2229152f1cc591564d2bbf1a12511b451ec0e6cfe87e9a512b088ba82ca4,PodSandboxId:1c18a574ea3877a4bbd73fe5105a508e323740badbbddae508bfd63c904b9b99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808
324360597640,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-722bw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8c2ec8-bdea-4fa0-9579-6aaa541ffbff,},Annotations:map[string]string{io.kubernetes.container.hash: 784feca4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5ffe139ee033e0c9ac3d70980a01bf4575cbdd4dd5eb1c3f69e0066962557c,PodSandboxId:7cdd3c97413c23dca7b585fce4fbd1cd230100c14d20dc1164ad440837be5e24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1687808314954472048,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4slff,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fd62a62e-8d69-4a78-8a9d-8d0bec28d653,},Annotations:map[string]string{io.kubernetes.container.hash: a3c74d4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9736cd104706ce508c566b9fb13c77e6b3d34815392d23fdd55d16fe5b32364c,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c4
41c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687808293886651582,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a69307f0369235246ea00e1925febefabe0b89a9e25e2e215e896a45b1e2c3a,PodSandboxId:ab34b98950be9364c5b0dd6e08037ff70d9b484a5f57d6625925dd7b2368ffab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687808262490165520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845c2d86-c88a-4b6f-8691-6fe83adda0a9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b137ced,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:792ad5629bc46e4dede087b00f659da8b89f9f1da8492b1c1a685e82edabbacb,PodSandboxId:14621323101b4f55983a0d21d05e7b17f3a63a155bd1c4bdd19bc0c0d456af06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,Sta
te:CONTAINER_RUNNING,CreatedAt:1687808260291614687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9vvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc946b0c-a7b4-46c1-938e-ed86d2139ad9,},Annotations:map[string]string{io.kubernetes.container.hash: f454e97c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd3dd7a2b922382a872b00d817936633712abd8b7dc3bee2f223c9359c556903,PodSandboxId:f1f550ccb7b7b90ef94d5e50357d7fca2067999d3bb832585caefe8908881361,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:16
87808249277879854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-9dfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14e2e5d-eda6-438a-94df-20024dc391e7,},Annotations:map[string]string{io.kubernetes.container.hash: 975be9de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dbac1ed2f4fa7fde361617847e82108e65d3b49783e71c3ef1380c1194bca67,PodSandboxId:72ad201fabef91487faac6f2a70dcda632ab833ba2205606996071dce564a3fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d96
68ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687808222415767384,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1428c15b5af0c103000a07ddb693ecc,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b7e5154f2e20ccc7e7907c58af36e897e5a216e8ec3da7d98bd2e6e2267e0e7,PodSandboxId:03c369fd0ab1477892caba2913d7d6eb4832c67a9bcaa35ebd4bfec11f9177ca,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Ann
otations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687808222217916603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7400425696090f42b637a7b9ae15c8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5060fb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3157c99719fa6201075f5c4d113bc6f6f69ab82b4b758d5135bca5042d222587,PodSandboxId:df4da0c3c12c8d093b20f8327b2c7aa3428498f448601d1bdc08efff8fe1c0ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageR
ef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687808221923771338,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6393c2003e1d0f51c9fdca8ed6ec73,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d53bd003cb3dc9dce7433453c3b22d63b641c61e07853822df43102f9cb2e30,PodSandboxId:6409143effac092bd9edbc66ffe77af50749977b8cae1548e86e0cb67318b926,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]strin
g{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687808221906214277,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-118062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104caea1e4fdb2f087634a5cce6e66b,},Annotations:map[string]string{io.kubernetes.container.hash: a97b0016,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=17bb15fa-ba03-4638-b888-c8ae2ddad4cc name=/runtime.v1alpha2.RuntimeService/ListContainers
	[two further identical ListContainers request/response cycles (id=ec4e49c3…, id=8b1867c3…) elided; their payloads are byte-identical to the response above]
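The debug entries above are CRI-O answering the kubelet's periodic ListContainers polls over the /runtime.v1alpha2.RuntimeService gRPC surface; the payload does not change between cycles because no container changed state during the capture. As a rough way to reproduce the same listing by hand, the RPC can be driven through crictl against the guest's CRI socket (a sketch; it assumes crictl and sudo are available inside the minikube guest):

    out/minikube-linux-amd64 -p addons-118062 ssh \
      "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"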
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID
	2a5c756e7393c       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      6 seconds ago       Running             hello-world-app           0                   01093b31b66f7
	6a42dec9acf33       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   d4e752917e989
	539f5e83ffad8       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        2 minutes ago       Running             headlamp                  0                   4a10ec27b80d0
	a8edc14e5dda2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   afe328c77e986
	a8ec2229152f1       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                             3 minutes ago       Exited              patch                     2                   1c18a574ea387
	df5ffe139ee03       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   7cdd3c97413c2
	9736cd104706c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       1                   ab34b98950be9
	0a69307f03692       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   ab34b98950be9
	792ad5629bc46       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                                             4 minutes ago       Running             kube-proxy                0                   14621323101b4
	dd3dd7a2b9223       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   f1f550ccb7b7b
	3dbac1ed2f4fa       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                                             4 minutes ago       Running             kube-scheduler            0                   72ad201fabef9
	8b7e5154f2e20       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   03c369fd0ab14
	3157c99719fa6       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                                             4 minutes ago       Running             kube-controller-manager   0                   df4da0c3c12c8
	6d53bd003cb3d       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                                             4 minutes ago       Running             kube-apiserver            0                   6409143effac0
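The table above is the condensed view of the same ListContainers payloads. When triaging the failed curl probe, a usual next step is to pull logs for the containers on the request path; a sketch using IDs from the table (crictl availability inside the guest is again assumed):

    # tail the nginx backend the ingress test curls
    out/minikube-linux-amd64 -p addons-118062 ssh "sudo crictl logs 6a42dec9acf33"
    # narrow the listing to one workload by name
    out/minikube-linux-amd64 -p addons-118062 ssh "sudo crictl ps --name hello-world-app"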
	
	* 
	* ==> coredns [dd3dd7a2b922382a872b00d817936633712abd8b7dc3bee2f223c9359c556903] <==
	* [INFO] 10.244.0.7:38275 - 751 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000237163s
	[INFO] 10.244.0.7:58995 - 8417 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044312s
	[INFO] 10.244.0.7:58995 - 42979 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000023918s
	[INFO] 10.244.0.7:50810 - 37977 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00002566s
	[INFO] 10.244.0.7:50810 - 37975 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028944s
	[INFO] 10.244.0.7:47033 - 9657 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000024442s
	[INFO] 10.244.0.7:47033 - 63416 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00002006s
	[INFO] 10.244.0.7:57538 - 40768 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000099511s
	[INFO] 10.244.0.7:57538 - 53069 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000073127s
	[INFO] 10.244.0.7:45054 - 64494 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087965s
	[INFO] 10.244.0.7:45054 - 25068 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074729s
	[INFO] 10.244.0.7:41476 - 20535 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072641s
	[INFO] 10.244.0.7:41476 - 1081 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060808s
	[INFO] 10.244.0.7:57289 - 44528 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000078692s
	[INFO] 10.244.0.7:57289 - 2802 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000060818s
	[INFO] 10.244.0.19:58224 - 59717 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000852418s
	[INFO] 10.244.0.19:45755 - 40109 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000161779s
	[INFO] 10.244.0.19:45958 - 22186 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106861s
	[INFO] 10.244.0.19:40046 - 30358 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079s
	[INFO] 10.244.0.19:43322 - 48040 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000074322s
	[INFO] 10.244.0.19:46067 - 1725 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070627s
	[INFO] 10.244.0.19:46952 - 17936 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000789582s
	[INFO] 10.244.0.19:44266 - 1825 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000458046s
	[INFO] 10.244.0.21:43424 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000254244s
	[INFO] 10.244.0.21:34479 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000804008s
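The NXDOMAIN bursts above are expected resolver behavior rather than an error: with the default pod resolv.conf, any name with fewer than five dots is expanded through the cluster search domains before being tried verbatim, so registry.kube-system.svc.cluster.local (four dots) walks the kube-system.svc.cluster.local, svc.cluster.local, and cluster.local suffixes before the final NOERROR answer; the 10.244.0.19 client expands through gcp-auth.svc.cluster.local first because it runs in the gcp-auth namespace. A minimal sketch of the resolv.conf that produces this pattern for a kube-system pod (the nameserver address is the conventional kube-dns ClusterIP and is an assumption here):

    nameserver 10.96.0.10
    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5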
	
	* 
	* ==> describe nodes <==
	* Name:               addons-118062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-118062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=addons-118062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T19_37_10_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-118062
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 19:37:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-118062
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 19:41:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 19:40:45 +0000   Mon, 26 Jun 2023 19:37:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 19:40:45 +0000   Mon, 26 Jun 2023 19:37:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 19:40:45 +0000   Mon, 26 Jun 2023 19:37:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 19:40:45 +0000   Mon, 26 Jun 2023 19:37:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    addons-118062
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 afc5fbfbef004f339f794c5d2313eee5
	  System UUID:                afc5fbfb-ef00-4f33-9f79-4c5d2313eee5
	  Boot ID:                    cafe2d51-b0f2-4371-ba8b-53eb5fa4d899
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-vk64k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-58478865f7-zdvq4                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  headlamp                    headlamp-66f6498c69-22z64                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 coredns-5d78c9869d-9dfbm                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m28s
	  kube-system                 etcd-addons-118062                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-118062             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-118062    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-w9vvt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-addons-118062             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m51s (x8 over 4m51s)  kubelet          Node addons-118062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s (x8 over 4m51s)  kubelet          Node addons-118062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s (x7 over 4m51s)  kubelet          Node addons-118062 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s                  kubelet          Node addons-118062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s                  kubelet          Node addons-118062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s                  kubelet          Node addons-118062 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m41s                  kubelet          Node addons-118062 status is now: NodeReady
	  Normal  RegisteredNode           4m29s                  node-controller  Node addons-118062 event: Registered Node addons-118062 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.228633] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.282728] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149303] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.009529] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.871695] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.112938] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.135558] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.098075] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.205530] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +8.969728] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[Jun26 19:37] systemd-fstab-generator[1248]: Ignoring "noauto" for root device
	[ +25.308495] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.897152] kauditd_printk_skb: 8 callbacks suppressed
	[Jun26 19:38] kauditd_printk_skb: 14 callbacks suppressed
	[ +36.750446] kauditd_printk_skb: 8 callbacks suppressed
	[  +8.627170] kauditd_printk_skb: 3 callbacks suppressed
	[Jun26 19:39] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.205419] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.543264] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.353454] kauditd_printk_skb: 5 callbacks suppressed
	[Jun26 19:40] kauditd_printk_skb: 2 callbacks suppressed
	[Jun26 19:41] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [8b7e5154f2e20ccc7e7907c58af36e897e5a216e8ec3da7d98bd2e6e2267e0e7] <==
	* {"level":"info","ts":"2023-06-26T19:38:14.661Z","caller":"traceutil/trace.go:171","msg":"trace[468483212] transaction","detail":"{read_only:false; response_revision:896; number_of_response:1; }","duration":"151.79074ms","start":"2023-06-26T19:38:14.509Z","end":"2023-06-26T19:38:14.661Z","steps":["trace[468483212] 'process raft request'  (duration: 151.602077ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T19:38:33.117Z","caller":"traceutil/trace.go:171","msg":"trace[285836059] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"245.394099ms","start":"2023-06-26T19:38:32.872Z","end":"2023-06-26T19:38:33.117Z","steps":["trace[285836059] 'process raft request'  (duration: 245.251473ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T19:38:33.120Z","caller":"traceutil/trace.go:171","msg":"trace[2026034514] linearizableReadLoop","detail":"{readStateIndex:992; appliedIndex:992; }","duration":"203.883293ms","start":"2023-06-26T19:38:32.916Z","end":"2023-06-26T19:38:33.120Z","steps":["trace[2026034514] 'read index received'  (duration: 203.879368ms)","trace[2026034514] 'applied index is now lower than readState.Index'  (duration: 3.461µs)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T19:38:33.120Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.061046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-06-26T19:38:33.120Z","caller":"traceutil/trace.go:171","msg":"trace[625755736] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:961; }","duration":"204.106802ms","start":"2023-06-26T19:38:32.916Z","end":"2023-06-26T19:38:33.120Z","steps":["trace[625755736] 'agreement among raft nodes before linearized reading'  (duration: 204.017517ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T19:38:43.356Z","caller":"traceutil/trace.go:171","msg":"trace[773848871] linearizableReadLoop","detail":"{readStateIndex:1060; appliedIndex:1059; }","duration":"151.300134ms","start":"2023-06-26T19:38:43.205Z","end":"2023-06-26T19:38:43.356Z","steps":["trace[773848871] 'read index received'  (duration: 151.156976ms)","trace[773848871] 'applied index is now lower than readState.Index'  (duration: 142.776µs)"],"step_count":2}
	{"level":"info","ts":"2023-06-26T19:38:43.357Z","caller":"traceutil/trace.go:171","msg":"trace[495572878] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"169.197064ms","start":"2023-06-26T19:38:43.188Z","end":"2023-06-26T19:38:43.357Z","steps":["trace[495572878] 'process raft request'  (duration: 168.679469ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T19:38:43.357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.015437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-06-26T19:38:43.357Z","caller":"traceutil/trace.go:171","msg":"trace[1409930052] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1027; }","duration":"152.092414ms","start":"2023-06-26T19:38:43.205Z","end":"2023-06-26T19:38:43.357Z","steps":["trace[1409930052] 'agreement among raft nodes before linearized reading'  (duration: 151.973329ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T19:38:43.358Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.473054ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14194"}
	{"level":"info","ts":"2023-06-26T19:38:43.358Z","caller":"traceutil/trace.go:171","msg":"trace[282513454] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1027; }","duration":"122.970602ms","start":"2023-06-26T19:38:43.235Z","end":"2023-06-26T19:38:43.358Z","steps":["trace[282513454] 'agreement among raft nodes before linearized reading'  (duration: 122.416022ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T19:38:43.358Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.055309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78333"}
	{"level":"info","ts":"2023-06-26T19:38:43.359Z","caller":"traceutil/trace.go:171","msg":"trace[2100286903] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1027; }","duration":"107.423283ms","start":"2023-06-26T19:38:43.251Z","end":"2023-06-26T19:38:43.359Z","steps":["trace[2100286903] 'agreement among raft nodes before linearized reading'  (duration: 106.967593ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T19:38:57.863Z","caller":"traceutil/trace.go:171","msg":"trace[1441194523] linearizableReadLoop","detail":"{readStateIndex:1137; appliedIndex:1136; }","duration":"111.479921ms","start":"2023-06-26T19:38:57.751Z","end":"2023-06-26T19:38:57.863Z","steps":["trace[1441194523] 'read index received'  (duration: 111.349141ms)","trace[1441194523] 'applied index is now lower than readState.Index'  (duration: 130.4µs)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T19:38:57.863Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.724301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78333"}
	{"level":"info","ts":"2023-06-26T19:38:57.863Z","caller":"traceutil/trace.go:171","msg":"trace[1328129852] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1101; }","duration":"111.79172ms","start":"2023-06-26T19:38:57.751Z","end":"2023-06-26T19:38:57.863Z","steps":["trace[1328129852] 'agreement among raft nodes before linearized reading'  (duration: 111.594487ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T19:38:57.863Z","caller":"traceutil/trace.go:171","msg":"trace[2123536569] transaction","detail":"{read_only:false; response_revision:1101; number_of_response:1; }","duration":"281.270025ms","start":"2023-06-26T19:38:57.582Z","end":"2023-06-26T19:38:57.863Z","steps":["trace[2123536569] 'process raft request'  (duration: 280.679913ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T19:39:07.843Z","caller":"traceutil/trace.go:171","msg":"trace[1392374263] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"324.968152ms","start":"2023-06-26T19:39:07.517Z","end":"2023-06-26T19:39:07.842Z","steps":["trace[1392374263] 'process raft request'  (duration: 324.820779ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T19:39:07.843Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-26T19:39:07.517Z","time spent":"325.859021ms","remote":"127.0.0.1:36270","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":716,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/hpvc.176c4cbe949da105\" mod_revision:1120 > success:<request_put:<key:\"/registry/events/default/hpvc.176c4cbe949da105\" value_size:652 lease:1818760428456160204 >> failure:<request_range:<key:\"/registry/events/default/hpvc.176c4cbe949da105\" > >"}
	{"level":"info","ts":"2023-06-26T19:39:21.348Z","caller":"traceutil/trace.go:171","msg":"trace[764320580] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"120.722011ms","start":"2023-06-26T19:39:21.227Z","end":"2023-06-26T19:39:21.348Z","steps":["trace[764320580] 'read index received'  (duration: 120.467567ms)","trace[764320580] 'applied index is now lower than readState.Index'  (duration: 253.961µs)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T19:39:21.348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.946403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:14 size:68070"}
	{"level":"info","ts":"2023-06-26T19:39:21.348Z","caller":"traceutil/trace.go:171","msg":"trace[364489546] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:14; response_revision:1294; }","duration":"121.017308ms","start":"2023-06-26T19:39:21.227Z","end":"2023-06-26T19:39:21.348Z","steps":["trace[364489546] 'agreement among raft nodes before linearized reading'  (duration: 120.805608ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T19:39:21.348Z","caller":"traceutil/trace.go:171","msg":"trace[1623981338] transaction","detail":"{read_only:false; response_revision:1294; number_of_response:1; }","duration":"156.05118ms","start":"2023-06-26T19:39:21.192Z","end":"2023-06-26T19:39:21.348Z","steps":["trace[1623981338] 'process raft request'  (duration: 155.282019ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T19:40:14.317Z","caller":"traceutil/trace.go:171","msg":"trace[882298426] transaction","detail":"{read_only:false; response_revision:1431; number_of_response:1; }","duration":"159.006519ms","start":"2023-06-26T19:40:14.158Z","end":"2023-06-26T19:40:14.317Z","steps":["trace[882298426] 'process raft request'  (duration: 158.66088ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T19:40:45.496Z","caller":"traceutil/trace.go:171","msg":"trace[1068997510] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"218.213017ms","start":"2023-06-26T19:40:45.278Z","end":"2023-06-26T19:40:45.496Z","steps":["trace[1068997510] 'process raft request'  (duration: 217.878912ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [a8edc14e5dda2ecafc8f607d4ff42808d5e02442c1f79ef8a2852abe67da4568] <==
	* 2023/06/26 19:38:54 GCP Auth Webhook started!
	2023/06/26 19:39:02 Ready to marshal response ...
	2023/06/26 19:39:02 Ready to write response ...
	2023/06/26 19:39:02 Ready to marshal response ...
	2023/06/26 19:39:02 Ready to write response ...
	2023/06/26 19:39:02 Ready to marshal response ...
	2023/06/26 19:39:02 Ready to write response ...
	2023/06/26 19:39:10 Ready to marshal response ...
	2023/06/26 19:39:10 Ready to write response ...
	2023/06/26 19:39:16 Ready to marshal response ...
	2023/06/26 19:39:16 Ready to write response ...
	2023/06/26 19:39:23 Ready to marshal response ...
	2023/06/26 19:39:23 Ready to write response ...
	2023/06/26 19:40:07 Ready to marshal response ...
	2023/06/26 19:40:07 Ready to write response ...
	2023/06/26 19:40:45 Ready to marshal response ...
	2023/06/26 19:40:45 Ready to write response ...
	2023/06/26 19:41:41 Ready to marshal response ...
	2023/06/26 19:41:41 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:41:51 up 5 min,  0 users,  load average: 1.00, 1.97, 1.03
	Linux addons-118062 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6d53bd003cb3dc9dce7433453c3b22d63b641c61e07853822df43102f9cb2e30] <==
	* W0626 19:40:15.068897       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 19:40:15.068952       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 19:40:15.068985       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 19:40:22.268955       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0626 19:41:04.109763       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 19:41:04.110009       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 19:41:04.128633       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 19:41:04.128712       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 19:41:04.147129       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 19:41:04.147280       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 19:41:04.159060       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 19:41:04.159115       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 19:41:04.166342       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 19:41:04.166412       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 19:41:04.193300       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 19:41:04.193396       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 19:41:04.222202       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 19:41:04.222342       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 19:41:04.236716       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 19:41:04.236934       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0626 19:41:05.159639       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0626 19:41:05.238497       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0626 19:41:05.254903       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0626 19:41:41.280543       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.105.101.40]
	
	* 
	* ==> kube-controller-manager [3157c99719fa6201075f5c4d113bc6f6f69ab82b4b758d5135bca5042d222587] <==
	* E0626 19:41:13.825956       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 19:41:14.827194       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 19:41:14.827285       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 19:41:20.582257       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 19:41:20.582469       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0626 19:41:22.668403       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0626 19:41:22.668667       1 shared_informer.go:318] Caches are synced for resource quota
	W0626 19:41:23.014709       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 19:41:23.014895       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0626 19:41:23.096444       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0626 19:41:23.096624       1 shared_informer.go:318] Caches are synced for garbage collector
	W0626 19:41:27.415183       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 19:41:27.415247       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 19:41:37.268376       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 19:41:37.268434       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 19:41:38.319332       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 19:41:38.319395       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0626 19:41:41.022559       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0626 19:41:41.071064       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-vk64k"
	I0626 19:41:43.633436       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0626 19:41:43.644600       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0626 19:41:45.479159       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 19:41:45.479262       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 19:41:49.290103       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 19:41:49.290159       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [792ad5629bc46e4dede087b00f659da8b89f9f1da8492b1c1a685e82edabbacb] <==
	* I0626 19:37:42.661621       1 node.go:141] Successfully retrieved node IP: 192.168.39.92
	I0626 19:37:42.662058       1 server_others.go:110] "Detected node IP" address="192.168.39.92"
	I0626 19:37:42.662118       1 server_others.go:554] "Using iptables proxy"
	I0626 19:37:42.965918       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0626 19:37:42.965988       1 server_others.go:192] "Using iptables Proxier"
	I0626 19:37:42.966039       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 19:37:42.966671       1 server.go:658] "Version info" version="v1.27.3"
	I0626 19:37:42.972755       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 19:37:42.977119       1 config.go:188] "Starting service config controller"
	I0626 19:37:42.977204       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 19:37:42.977300       1 config.go:97] "Starting endpoint slice config controller"
	I0626 19:37:42.977352       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 19:37:42.994843       1 config.go:315] "Starting node config controller"
	I0626 19:37:42.994930       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 19:37:43.077697       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 19:37:43.077892       1 shared_informer.go:318] Caches are synced for service config
	I0626 19:37:43.108716       1 shared_informer.go:318] Caches are synced for node config
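
Editor's note: the route_localnet line above is directly relevant to the failing test, since it is what lets the in-VM `curl http://127.0.0.1/` reach a node port at all. A quick way to confirm the sysctl and the NodePort rules, as a sketch (the KUBE-NODEPORTS chain name is the iptables proxier's convention):

    out/minikube-linux-amd64 -p addons-118062 ssh "sysctl net.ipv4.conf.all.route_localnet"
    out/minikube-linux-amd64 -p addons-118062 ssh "sudo iptables -t nat -L KUBE-NODEPORTS -n"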
	
	* 
	* ==> kube-scheduler [3dbac1ed2f4fa7fde361617847e82108e65d3b49783e71c3ef1380c1194bca67] <==
	* W0626 19:37:06.612147       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 19:37:06.612176       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 19:37:06.612276       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 19:37:06.612310       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0626 19:37:07.422461       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 19:37:07.422592       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0626 19:37:07.545161       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 19:37:07.545280       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 19:37:07.568779       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0626 19:37:07.568975       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0626 19:37:07.639620       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 19:37:07.639755       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 19:37:07.652531       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0626 19:37:07.652621       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0626 19:37:07.694952       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 19:37:07.695063       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0626 19:37:07.714130       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 19:37:07.714152       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0626 19:37:07.725887       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 19:37:07.726022       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 19:37:07.752163       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 19:37:07.752214       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0626 19:37:07.814837       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 19:37:07.814929       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0626 19:37:09.284545       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 19:36:36 UTC, ends at Mon 2023-06-26 19:41:52 UTC. --
	Jun 26 19:41:41 addons-118062 kubelet[1258]: I0626 19:41:41.087245    1258 memory_manager.go:346] "RemoveStaleState removing state" podUID="10d7be63-0e4b-4436-ace8-c42f7ecfe6f5" containerName="volume-snapshot-controller"
	Jun 26 19:41:41 addons-118062 kubelet[1258]: I0626 19:41:41.087251    1258 memory_manager.go:346] "RemoveStaleState removing state" podUID="a271cda6-54ab-4469-93b2-edfae1c59a49" containerName="liveness-probe"
	Jun 26 19:41:41 addons-118062 kubelet[1258]: I0626 19:41:41.087257    1258 memory_manager.go:346] "RemoveStaleState removing state" podUID="fc7f4613-39b6-4c42-89b8-33bfc7685209" containerName="csi-attacher"
	Jun 26 19:41:41 addons-118062 kubelet[1258]: I0626 19:41:41.220206    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-559d4\" (UniqueName: \"kubernetes.io/projected/4bca1389-ea78-4dac-b549-e19fd9eb37e4-kube-api-access-559d4\") pod \"hello-world-app-65bdb79f98-vk64k\" (UID: \"4bca1389-ea78-4dac-b549-e19fd9eb37e4\") " pod="default/hello-world-app-65bdb79f98-vk64k"
	Jun 26 19:41:41 addons-118062 kubelet[1258]: I0626 19:41:41.220279    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4bca1389-ea78-4dac-b549-e19fd9eb37e4-gcp-creds\") pod \"hello-world-app-65bdb79f98-vk64k\" (UID: \"4bca1389-ea78-4dac-b549-e19fd9eb37e4\") " pod="default/hello-world-app-65bdb79f98-vk64k"
	Jun 26 19:41:42 addons-118062 kubelet[1258]: I0626 19:41:42.227531    1258 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkdj7\" (UniqueName: \"kubernetes.io/projected/29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c-kube-api-access-fkdj7\") pod \"29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c\" (UID: \"29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c\") "
	Jun 26 19:41:42 addons-118062 kubelet[1258]: I0626 19:41:42.240082    1258 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c-kube-api-access-fkdj7" (OuterVolumeSpecName: "kube-api-access-fkdj7") pod "29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c" (UID: "29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c"). InnerVolumeSpecName "kube-api-access-fkdj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 26 19:41:42 addons-118062 kubelet[1258]: I0626 19:41:42.328769    1258 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fkdj7\" (UniqueName: \"kubernetes.io/projected/29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c-kube-api-access-fkdj7\") on node \"addons-118062\" DevicePath \"\""
	Jun 26 19:41:43 addons-118062 kubelet[1258]: I0626 19:41:43.143213    1258 scope.go:115] "RemoveContainer" containerID="87ec51d66b7fb5e9da11f85bde1a0ba31a08b48326e1558010c2baabde8f659d"
	Jun 26 19:41:43 addons-118062 kubelet[1258]: E0626 19:41:43.681662    1258 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7b4698b8c7-w5knv.176c4ce44632d20f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7b4698b8c7-w5knv", UID:"4a8fc666-5c60-4b55-bb05-d8c7e51b55c8", APIVersion:"v1", ResourceVersion:"670", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-118062"}, FirstTimestamp:time.Date(2023, time.June, 26, 19, 41, 43, 671083535, time.Local), LastTimestamp:time.Date(2023, time.June, 26, 19, 41, 43, 671083535, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7b4698b8c7-w5knv.176c4ce44632d20f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 26 19:41:44 addons-118062 kubelet[1258]: I0626 19:41:44.211916    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c path="/var/lib/kubelet/pods/29ff2a6a-fe44-4284-9c3e-2b7c4fb0e97c/volumes"
	Jun 26 19:41:44 addons-118062 kubelet[1258]: I0626 19:41:44.214890    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=4a8c2ec8-bdea-4fa0-9579-6aaa541ffbff path="/var/lib/kubelet/pods/4a8c2ec8-bdea-4fa0-9579-6aaa541ffbff/volumes"
	Jun 26 19:41:44 addons-118062 kubelet[1258]: I0626 19:41:44.218377    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=fd62a62e-8d69-4a78-8a9d-8d0bec28d653 path="/var/lib/kubelet/pods/fd62a62e-8d69-4a78-8a9d-8d0bec28d653/volumes"
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.153009    1258 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rk8x\" (UniqueName: \"kubernetes.io/projected/4a8fc666-5c60-4b55-bb05-d8c7e51b55c8-kube-api-access-2rk8x\") pod \"4a8fc666-5c60-4b55-bb05-d8c7e51b55c8\" (UID: \"4a8fc666-5c60-4b55-bb05-d8c7e51b55c8\") "
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.153080    1258 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a8fc666-5c60-4b55-bb05-d8c7e51b55c8-webhook-cert\") pod \"4a8fc666-5c60-4b55-bb05-d8c7e51b55c8\" (UID: \"4a8fc666-5c60-4b55-bb05-d8c7e51b55c8\") "
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.159254    1258 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8fc666-5c60-4b55-bb05-d8c7e51b55c8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4a8fc666-5c60-4b55-bb05-d8c7e51b55c8" (UID: "4a8fc666-5c60-4b55-bb05-d8c7e51b55c8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.162861    1258 scope.go:115] "RemoveContainer" containerID="052630dac5f6e15a394d50034df30246b665a5f1ab5489aa3f1b60eb73b69441"
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.169241    1258 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8fc666-5c60-4b55-bb05-d8c7e51b55c8-kube-api-access-2rk8x" (OuterVolumeSpecName: "kube-api-access-2rk8x") pod "4a8fc666-5c60-4b55-bb05-d8c7e51b55c8" (UID: "4a8fc666-5c60-4b55-bb05-d8c7e51b55c8"). InnerVolumeSpecName "kube-api-access-2rk8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.183212    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-vk64k" podStartSLOduration=1.668573076 podCreationTimestamp="2023-06-26 19:41:41 +0000 UTC" firstStartedPulling="2023-06-26 19:41:42.224573824 +0000 UTC m=+272.185794437" lastFinishedPulling="2023-06-26 19:41:44.739174883 +0000 UTC m=+274.700395498" observedRunningTime="2023-06-26 19:41:45.182935181 +0000 UTC m=+275.144155806" watchObservedRunningTime="2023-06-26 19:41:45.183174137 +0000 UTC m=+275.144394769"
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.216442    1258 scope.go:115] "RemoveContainer" containerID="052630dac5f6e15a394d50034df30246b665a5f1ab5489aa3f1b60eb73b69441"
	Jun 26 19:41:45 addons-118062 kubelet[1258]: E0626 19:41:45.217268    1258 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052630dac5f6e15a394d50034df30246b665a5f1ab5489aa3f1b60eb73b69441\": container with ID starting with 052630dac5f6e15a394d50034df30246b665a5f1ab5489aa3f1b60eb73b69441 not found: ID does not exist" containerID="052630dac5f6e15a394d50034df30246b665a5f1ab5489aa3f1b60eb73b69441"
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.217354    1258 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:052630dac5f6e15a394d50034df30246b665a5f1ab5489aa3f1b60eb73b69441} err="failed to get container status \"052630dac5f6e15a394d50034df30246b665a5f1ab5489aa3f1b60eb73b69441\": rpc error: code = NotFound desc = could not find container \"052630dac5f6e15a394d50034df30246b665a5f1ab5489aa3f1b60eb73b69441\": container with ID starting with 052630dac5f6e15a394d50034df30246b665a5f1ab5489aa3f1b60eb73b69441 not found: ID does not exist"
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.253873    1258 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2rk8x\" (UniqueName: \"kubernetes.io/projected/4a8fc666-5c60-4b55-bb05-d8c7e51b55c8-kube-api-access-2rk8x\") on node \"addons-118062\" DevicePath \"\""
	Jun 26 19:41:45 addons-118062 kubelet[1258]: I0626 19:41:45.253950    1258 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a8fc666-5c60-4b55-bb05-d8c7e51b55c8-webhook-cert\") on node \"addons-118062\" DevicePath \"\""
	Jun 26 19:41:46 addons-118062 kubelet[1258]: I0626 19:41:46.210574    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=4a8fc666-5c60-4b55-bb05-d8c7e51b55c8 path="/var/lib/kubelet/pods/4a8fc666-5c60-4b55-bb05-d8c7e51b55c8/volumes"
	
	* 
	* ==> storage-provisioner [0a69307f0369235246ea00e1925febefabe0b89a9e25e2e215e896a45b1e2c3a] <==
	* I0626 19:37:43.079900       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0626 19:38:13.115127       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [9736cd104706ce508c566b9fb13c77e6b3d34815392d23fdd55d16fe5b32364c] <==
	* I0626 19:38:14.555121       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 19:38:14.596367       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 19:38:14.596430       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 19:38:14.687114       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 19:38:14.690605       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-118062_a8ff2c8c-6e37-4e08-bb4c-a95988080401!
	I0626 19:38:14.694466       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2fe4010c-968c-4b6c-b290-734479f48dcd", APIVersion:"v1", ResourceVersion:"898", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-118062_a8ff2c8c-6e37-4e08-bb4c-a95988080401 became leader
	I0626 19:38:14.791073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-118062_a8ff2c8c-6e37-4e08-bb4c-a95988080401!
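
Editor's note: the two storage-provisioner logs tell one story. The first container died after 30s because the apiserver's cluster IP (10.96.0.1:443) was not yet reachable; the restart succeeded once kube-proxy had programmed the service rules. A reachability sketch, reusing the same hypothetical "dnsutils" pod as above (busybox wget flags are an assumption):

    kubectl --context addons-118062 get endpoints kubernetes
    kubectl --context addons-118062 exec dnsutils -- wget -qO- -T 5 --no-check-certificate https://10.96.0.1:443/version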
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-118062 -n addons-118062
helpers_test.go:261: (dbg) Run:  kubectl --context addons-118062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (157.47s)
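
Editor's note: the step that actually failed is the in-VM curl against the ingress controller, which hung for the full 2m10s. A manual repro with a bounded timeout plus a look at the controller, as a sketch (the deployment name follows the ingress addon's usual "ingress-nginx-controller" and is an assumption here):

    out/minikube-linux-amd64 -p addons-118062 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-118062 -n ingress-nginx get pods -o wide
    kubectl --context addons-118062 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50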

                                                
                                    
TestAddons/StoppedEnableDisable (140.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-118062
addons_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-118062: exit status 82 (2m1.702494465s)

                                                
                                                
-- stdout --
	* Stopping node "addons-118062"  ...
	* Stopping node "addons-118062"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:150: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-118062" : exit status 82
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-118062
addons_test.go:152: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-118062: exit status 10 (18.432664506s)

                                                
                                                
-- stdout --
	* dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-118062" : exit status 10
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-118062
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-118062
--- FAIL: TestAddons/StoppedEnableDisable (140.22s)
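
Editor's note: exit status 82 (GUEST_STOP_TIMEOUT) means the KVM guest never reached a stopped state within minikube's window, and the "no route to host" from the follow-up addon command is the same half-stopped VM. When this reproduces locally, the libvirt domain can be inspected directly; a sketch, assuming the KVM driver's domain name matches the profile name:

    virsh -c qemu:///system list --all                # is addons-118062 still "running"?
    virsh -c qemu:///system destroy addons-118062     # hard power-off, as a last resort
    out/minikube-linux-amd64 logs -p addons-118062 --file=logs.txt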

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (164.26s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-759751 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-759751 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.715013644s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-759751 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-759751 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9e453c39-db4b-4ce0-bf5b-62570a4cdb5b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9e453c39-db4b-4ce0-bf5b-62570a4cdb5b] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.011587189s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-759751 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0626 19:53:30.705240   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:30.710557   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:30.720874   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:30.741150   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:30.781450   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:30.861848   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:31.022253   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:31.342855   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:31.983801   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:33.264302   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:35.825306   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:40.946028   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:53:51.187158   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:54:00.824716   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 19:54:11.667698   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-759751 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.064436401s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
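Exit status 28 in the stderr block above matches curl's operation-timed-out code (CURLE_OPERATION_TIMEDOUT), propagated back through minikube ssh: the request reached the VM but the ingress backend never answered within curl's deadline. The probe can be rerun by hand with the exact command from the log:

    out/minikube-linux-amd64 -p ingress-addon-legacy-759751 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"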
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-759751 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-759751 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.7
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-759751 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-759751 addons disable ingress-dns --alsologtostderr -v=1: (2.258228919s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-759751 addons disable ingress --alsologtostderr -v=1
E0626 19:54:28.510038   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-759751 addons disable ingress --alsologtostderr -v=1: (7.482027365s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-759751 -n ingress-addon-legacy-759751
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-759751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-759751 logs -n 25: (1.057252454s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-244475                                                         | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-244475 image load --daemon                                     | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-244475                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| update-context | functional-244475                                                         | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-244475 image ls                                                | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	| image          | functional-244475 image save                                              | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-244475                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-244475 image rm                                                | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-244475                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-244475 image ls                                                | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	| image          | functional-244475 image load                                              | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-244475 image ls                                                | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	| image          | functional-244475 image save --daemon                                     | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-244475                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-244475                                                         | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-244475 ssh pgrep                                               | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-244475                                                         | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-244475                                                         | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-244475 image build -t                                          | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | localhost/my-image:functional-244475                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-244475                                                         | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-244475 image ls                                                | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	| delete         | -p functional-244475                                                      | functional-244475           | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:49 UTC |
	| start          | -p ingress-addon-legacy-759751                                            | ingress-addon-legacy-759751 | jenkins | v1.30.1 | 26 Jun 23 19:49 UTC | 26 Jun 23 19:51 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-759751                                               | ingress-addon-legacy-759751 | jenkins | v1.30.1 | 26 Jun 23 19:51 UTC | 26 Jun 23 19:51 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-759751                                               | ingress-addon-legacy-759751 | jenkins | v1.30.1 | 26 Jun 23 19:51 UTC | 26 Jun 23 19:51 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-759751                                               | ingress-addon-legacy-759751 | jenkins | v1.30.1 | 26 Jun 23 19:52 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-759751 ip                                            | ingress-addon-legacy-759751 | jenkins | v1.30.1 | 26 Jun 23 19:54 UTC | 26 Jun 23 19:54 UTC |
	| addons         | ingress-addon-legacy-759751                                               | ingress-addon-legacy-759751 | jenkins | v1.30.1 | 26 Jun 23 19:54 UTC | 26 Jun 23 19:54 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-759751                                               | ingress-addon-legacy-759751 | jenkins | v1.30.1 | 26 Jun 23 19:54 UTC | 26 Jun 23 19:54 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 19:49:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 19:49:28.784304   22992 out.go:296] Setting OutFile to fd 1 ...
	I0626 19:49:28.784411   22992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:49:28.784421   22992 out.go:309] Setting ErrFile to fd 2...
	I0626 19:49:28.784426   22992 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:49:28.784537   22992 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 19:49:28.785049   22992 out.go:303] Setting JSON to false
	I0626 19:49:28.785820   22992 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1916,"bootTime":1687807053,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 19:49:28.785875   22992 start.go:137] virtualization: kvm guest
	I0626 19:49:28.787969   22992 out.go:177] * [ingress-addon-legacy-759751] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 19:49:28.789329   22992 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 19:49:28.789370   22992 notify.go:220] Checking for updates...
	I0626 19:49:28.790859   22992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 19:49:28.792541   22992 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 19:49:28.793856   22992 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:49:28.795315   22992 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 19:49:28.796660   22992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 19:49:28.798104   22992 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 19:49:28.832504   22992 out.go:177] * Using the kvm2 driver based on user configuration
	I0626 19:49:28.834060   22992 start.go:297] selected driver: kvm2
	I0626 19:49:28.834072   22992 start.go:954] validating driver "kvm2" against <nil>
	I0626 19:49:28.834081   22992 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 19:49:28.834698   22992 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:49:28.834764   22992 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 19:49:28.848427   22992 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 19:49:28.848469   22992 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 19:49:28.848647   22992 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 19:49:28.848675   22992 cni.go:84] Creating CNI manager for ""
	I0626 19:49:28.848686   22992 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 19:49:28.848695   22992 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0626 19:49:28.848710   22992 start_flags.go:319] config:
	{Name:ingress-addon-legacy-759751 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-759751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:49:28.848817   22992 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:49:28.850705   22992 out.go:177] * Starting control plane node ingress-addon-legacy-759751 in cluster ingress-addon-legacy-759751
	I0626 19:49:28.852224   22992 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0626 19:49:29.364095   22992 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0626 19:49:29.364146   22992 cache.go:57] Caching tarball of preloaded images
	I0626 19:49:29.364327   22992 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0626 19:49:29.366582   22992 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0626 19:49:29.368200   22992 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0626 19:49:29.479619   22992 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0626 19:49:44.347902   22992 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0626 19:49:44.348002   22992 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0626 19:49:45.289761   22992 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
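The preload step above is an ordinary HTTPS download plus an md5 check; the cache can be pre-seeded manually with the URL and checksum printed in the log (sketch, using this run's cache directory):

    cd /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball
    curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
    echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -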
	I0626 19:49:45.290082   22992 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/config.json ...
	I0626 19:49:45.290110   22992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/config.json: {Name:mk646db0c060ac9b452a163a306c08e02caabe92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:49:45.290270   22992 start.go:365] acquiring machines lock for ingress-addon-legacy-759751: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 19:49:45.290300   22992 start.go:369] acquired machines lock for "ingress-addon-legacy-759751" in 14.723µs
	I0626 19:49:45.290315   22992 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-759751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-759751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 19:49:45.290380   22992 start.go:125] createHost starting for "" (driver="kvm2")
	I0626 19:49:45.292714   22992 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0626 19:49:45.292860   22992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:49:45.292886   22992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:49:45.306576   22992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
	I0626 19:49:45.307016   22992 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:49:45.307585   22992 main.go:141] libmachine: Using API Version  1
	I0626 19:49:45.307608   22992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:49:45.307953   22992 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:49:45.308143   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetMachineName
	I0626 19:49:45.308278   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:49:45.308408   22992 start.go:159] libmachine.API.Create for "ingress-addon-legacy-759751" (driver="kvm2")
	I0626 19:49:45.308457   22992 client.go:168] LocalClient.Create starting
	I0626 19:49:45.308485   22992 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem
	I0626 19:49:45.308515   22992 main.go:141] libmachine: Decoding PEM data...
	I0626 19:49:45.308531   22992 main.go:141] libmachine: Parsing certificate...
	I0626 19:49:45.308583   22992 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem
	I0626 19:49:45.308603   22992 main.go:141] libmachine: Decoding PEM data...
	I0626 19:49:45.308616   22992 main.go:141] libmachine: Parsing certificate...
	I0626 19:49:45.308633   22992 main.go:141] libmachine: Running pre-create checks...
	I0626 19:49:45.308643   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .PreCreateCheck
	I0626 19:49:45.308941   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetConfigRaw
	I0626 19:49:45.309265   22992 main.go:141] libmachine: Creating machine...
	I0626 19:49:45.309279   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .Create
	I0626 19:49:45.309431   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Creating KVM machine...
	I0626 19:49:45.310715   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found existing default KVM network
	I0626 19:49:45.311331   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:45.311208   23051 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000029890}
	I0626 19:49:45.316661   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | trying to create private KVM network mk-ingress-addon-legacy-759751 192.168.39.0/24...
	I0626 19:49:45.385327   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | private KVM network mk-ingress-addon-legacy-759751 192.168.39.0/24 created
	I0626 19:49:45.385360   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Setting up store path in /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751 ...
	I0626 19:49:45.385397   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:45.385289   23051 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:49:45.385438   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Building disk image from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso
	I0626 19:49:45.385562   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Downloading /home/jenkins/minikube-integration/16761-7242/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso...
	I0626 19:49:45.580961   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:45.580852   23051 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa...
	I0626 19:49:45.867262   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:45.867143   23051 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/ingress-addon-legacy-759751.rawdisk...
	I0626 19:49:45.867298   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Writing magic tar header
	I0626 19:49:45.867315   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Writing SSH key tar header
	I0626 19:49:45.867328   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:45.867258   23051 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751 ...
	I0626 19:49:45.867384   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751
	I0626 19:49:45.867441   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751 (perms=drwx------)
	I0626 19:49:45.867461   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines (perms=drwxr-xr-x)
	I0626 19:49:45.867470   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines
	I0626 19:49:45.867483   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:49:45.867493   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242
	I0626 19:49:45.867503   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0626 19:49:45.867511   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Checking permissions on dir: /home/jenkins
	I0626 19:49:45.867520   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube (perms=drwxr-xr-x)
	I0626 19:49:45.867532   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242 (perms=drwxrwxr-x)
	I0626 19:49:45.867539   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0626 19:49:45.867557   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Checking permissions on dir: /home
	I0626 19:49:45.867568   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Skipping /home - not owner
	I0626 19:49:45.867578   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0626 19:49:45.867594   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Creating domain...
	I0626 19:49:45.868829   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) define libvirt domain using xml: 
	I0626 19:49:45.868859   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) <domain type='kvm'>
	I0626 19:49:45.868869   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   <name>ingress-addon-legacy-759751</name>
	I0626 19:49:45.868879   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   <memory unit='MiB'>4096</memory>
	I0626 19:49:45.868897   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   <vcpu>2</vcpu>
	I0626 19:49:45.868905   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   <features>
	I0626 19:49:45.868911   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <acpi/>
	I0626 19:49:45.868918   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <apic/>
	I0626 19:49:45.868925   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <pae/>
	I0626 19:49:45.868930   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     
	I0626 19:49:45.868938   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   </features>
	I0626 19:49:45.868943   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   <cpu mode='host-passthrough'>
	I0626 19:49:45.868950   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   
	I0626 19:49:45.868962   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   </cpu>
	I0626 19:49:45.868974   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   <os>
	I0626 19:49:45.868997   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <type>hvm</type>
	I0626 19:49:45.869012   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <boot dev='cdrom'/>
	I0626 19:49:45.869022   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <boot dev='hd'/>
	I0626 19:49:45.869031   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <bootmenu enable='no'/>
	I0626 19:49:45.869039   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   </os>
	I0626 19:49:45.869045   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   <devices>
	I0626 19:49:45.869051   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <disk type='file' device='cdrom'>
	I0626 19:49:45.869067   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/boot2docker.iso'/>
	I0626 19:49:45.869081   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <target dev='hdc' bus='scsi'/>
	I0626 19:49:45.869092   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <readonly/>
	I0626 19:49:45.869102   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     </disk>
	I0626 19:49:45.869110   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <disk type='file' device='disk'>
	I0626 19:49:45.869118   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0626 19:49:45.869128   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/ingress-addon-legacy-759751.rawdisk'/>
	I0626 19:49:45.869137   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <target dev='hda' bus='virtio'/>
	I0626 19:49:45.869152   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     </disk>
	I0626 19:49:45.869168   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <interface type='network'>
	I0626 19:49:45.869185   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <source network='mk-ingress-addon-legacy-759751'/>
	I0626 19:49:45.869197   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <model type='virtio'/>
	I0626 19:49:45.869209   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     </interface>
	I0626 19:49:45.869219   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <interface type='network'>
	I0626 19:49:45.869235   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <source network='default'/>
	I0626 19:49:45.869252   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <model type='virtio'/>
	I0626 19:49:45.869266   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     </interface>
	I0626 19:49:45.869275   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <serial type='pty'>
	I0626 19:49:45.869289   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <target port='0'/>
	I0626 19:49:45.869305   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     </serial>
	I0626 19:49:45.869316   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <console type='pty'>
	I0626 19:49:45.869442   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <target type='serial' port='0'/>
	I0626 19:49:45.869468   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     </console>
	I0626 19:49:45.869479   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     <rng model='virtio'>
	I0626 19:49:45.869494   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)       <backend model='random'>/dev/random</backend>
	I0626 19:49:45.869511   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     </rng>
	I0626 19:49:45.869524   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     
	I0626 19:49:45.869536   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)     
	I0626 19:49:45.869549   22992 main.go:141] libmachine: (ingress-addon-legacy-759751)   </devices>
	I0626 19:49:45.869561   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) </domain>
	I0626 19:49:45.869577   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) 
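	The domain defined from the XML above can be checked with standard libvirt tooling; a sketch, assuming virsh is installed and pointed at the qemu:///system URI shown in the config:

	    virsh -c qemu:///system dumpxml ingress-addon-legacy-759751    # dump the definition libvirt actually stored
	    virsh -c qemu:///system domifaddr ingress-addon-legacy-759751  # the DHCP lease the retry loop below is polling for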
	I0626 19:49:45.874300   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:8c:ce:01 in network default
	I0626 19:49:45.874886   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Ensuring networks are active...
	I0626 19:49:45.874912   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:45.875556   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Ensuring network default is active
	I0626 19:49:45.875889   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Ensuring network mk-ingress-addon-legacy-759751 is active
	I0626 19:49:45.876420   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Getting domain xml...
	I0626 19:49:45.877107   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Creating domain...
	I0626 19:49:47.074893   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Waiting to get IP...
	I0626 19:49:47.075852   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:47.076339   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:47.076359   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:47.076295   23051 retry.go:31] will retry after 310.494801ms: waiting for machine to come up
	I0626 19:49:47.388894   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:47.389365   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:47.389406   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:47.389305   23051 retry.go:31] will retry after 305.538014ms: waiting for machine to come up
	I0626 19:49:47.696784   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:47.697187   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:47.697225   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:47.697143   23051 retry.go:31] will retry after 467.475ms: waiting for machine to come up
	I0626 19:49:48.165753   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:48.166175   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:48.166211   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:48.166136   23051 retry.go:31] will retry after 540.036558ms: waiting for machine to come up
	I0626 19:49:48.707829   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:48.708237   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:48.708267   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:48.708196   23051 retry.go:31] will retry after 581.109793ms: waiting for machine to come up
	I0626 19:49:49.290813   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:49.291255   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:49.291283   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:49.291189   23051 retry.go:31] will retry after 794.549857ms: waiting for machine to come up
	I0626 19:49:50.087032   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:50.087507   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:50.087536   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:50.087454   23051 retry.go:31] will retry after 1.187072335s: waiting for machine to come up
	I0626 19:49:51.275864   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:51.276238   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:51.276271   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:51.276218   23051 retry.go:31] will retry after 1.273464597s: waiting for machine to come up
	I0626 19:49:52.551511   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:52.551973   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:52.551995   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:52.551918   23051 retry.go:31] will retry after 1.717409178s: waiting for machine to come up
	I0626 19:49:54.270319   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:54.270774   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:54.270833   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:54.270752   23051 retry.go:31] will retry after 1.702010696s: waiting for machine to come up
	I0626 19:49:55.973881   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:55.974359   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:55.974385   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:55.974289   23051 retry.go:31] will retry after 2.668801241s: waiting for machine to come up
	I0626 19:49:58.644174   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:49:58.644613   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:49:58.644646   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:49:58.644561   23051 retry.go:31] will retry after 3.148942057s: waiting for machine to come up
	I0626 19:50:01.795445   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:01.795843   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:50:01.795865   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:50:01.795794   23051 retry.go:31] will retry after 3.909393942s: waiting for machine to come up
	I0626 19:50:05.706762   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:05.707241   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find current IP address of domain ingress-addon-legacy-759751 in network mk-ingress-addon-legacy-759751
	I0626 19:50:05.707264   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | I0626 19:50:05.707197   23051 retry.go:31] will retry after 5.414275068s: waiting for machine to come up
	I0626 19:50:11.123782   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.124375   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Found IP for machine: 192.168.39.7
	I0626 19:50:11.124395   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Reserving static IP address...
	I0626 19:50:11.124411   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has current primary IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.124867   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-759751", mac: "52:54:00:c8:4e:ba", ip: "192.168.39.7"} in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.196378   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Getting to WaitForSSH function...
	I0626 19:50:11.196473   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Reserved static IP address: 192.168.39.7
	I0626 19:50:11.196493   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Waiting for SSH to be available...
	I0626 19:50:11.199105   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.199520   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:11.199556   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.199665   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Using SSH client type: external
	I0626 19:50:11.199694   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa (-rw-------)
	I0626 19:50:11.199733   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 19:50:11.199753   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | About to run SSH command:
	I0626 19:50:11.199768   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | exit 0
	I0626 19:50:11.289096   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | SSH cmd err, output: <nil>: 
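	The reachability probe is just `exit 0` over SSH; the equivalent manual check, reassembled from the argument vector logged above (key path and IP are specific to this run):

	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa \
	      docker@192.168.39.7 'exit 0' && echo reachable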
	I0626 19:50:11.289434   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) KVM machine creation complete!
	I0626 19:50:11.289793   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetConfigRaw
	I0626 19:50:11.290387   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:50:11.290625   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:50:11.290789   22992 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0626 19:50:11.290809   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetState
	I0626 19:50:11.292358   22992 main.go:141] libmachine: Detecting operating system of created instance...
	I0626 19:50:11.292377   22992 main.go:141] libmachine: Waiting for SSH to be available...
	I0626 19:50:11.292386   22992 main.go:141] libmachine: Getting to WaitForSSH function...
	I0626 19:50:11.292394   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:11.294762   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.295289   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:11.295320   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.295401   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:11.295571   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:11.295705   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:11.295842   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:11.296048   22992 main.go:141] libmachine: Using SSH client type: native
	I0626 19:50:11.296483   22992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0626 19:50:11.296498   22992 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0626 19:50:11.416479   22992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 19:50:11.416500   22992 main.go:141] libmachine: Detecting the provisioner...
	I0626 19:50:11.416508   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:11.419558   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.419821   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:11.419853   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.420031   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:11.420217   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:11.420414   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:11.420573   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:11.420721   22992 main.go:141] libmachine: Using SSH client type: native
	I0626 19:50:11.421099   22992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0626 19:50:11.421111   22992 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0626 19:50:11.542365   22992 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2e95ab-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0626 19:50:11.542462   22992 main.go:141] libmachine: found compatible host: buildroot
	I0626 19:50:11.542477   22992 main.go:141] libmachine: Provisioning with buildroot...
	I0626 19:50:11.542492   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetMachineName
	I0626 19:50:11.542795   22992 buildroot.go:166] provisioning hostname "ingress-addon-legacy-759751"
	I0626 19:50:11.542817   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetMachineName
	I0626 19:50:11.543031   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:11.545671   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.546046   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:11.546078   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.546239   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:11.546435   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:11.546609   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:11.546747   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:11.546937   22992 main.go:141] libmachine: Using SSH client type: native
	I0626 19:50:11.547318   22992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0626 19:50:11.547332   22992 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-759751 && echo "ingress-addon-legacy-759751" | sudo tee /etc/hostname
	I0626 19:50:11.682391   22992 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-759751
	
	I0626 19:50:11.682420   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:11.685183   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.685576   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:11.685606   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.685760   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:11.685948   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:11.686140   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:11.686278   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:11.686419   22992 main.go:141] libmachine: Using SSH client type: native
	I0626 19:50:11.686798   22992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0626 19:50:11.686815   22992 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-759751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-759751/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-759751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 19:50:11.813679   22992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
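The two SSH commands above are the whole of hostname provisioning: set the kernel hostname, persist it to /etc/hostname, and pin a 127.0.1.1 entry so the name resolves locally. A rough standalone reproduction over the same identity (host IP and key path are taken from the log; the grep guard mirrors the checks in the script above):

    # Sketch: re-run the hostname provisioning by hand over the same SSH key.
    KEY=/home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa
    ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.39.7 '
      sudo hostname ingress-addon-legacy-759751
      echo ingress-addon-legacy-759751 | sudo tee /etc/hostname
      grep -q ingress-addon-legacy-759751 /etc/hosts ||
        echo "127.0.1.1 ingress-addon-legacy-759751" | sudo tee -a /etc/hosts'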
	I0626 19:50:11.813702   22992 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 19:50:11.813718   22992 buildroot.go:174] setting up certificates
	I0626 19:50:11.813727   22992 provision.go:83] configureAuth start
	I0626 19:50:11.813735   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetMachineName
	I0626 19:50:11.814006   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetIP
	I0626 19:50:11.816537   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.816955   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:11.816980   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.817142   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:11.819618   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.819986   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:11.820019   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.820115   22992 provision.go:138] copyHostCerts
	I0626 19:50:11.820149   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 19:50:11.820200   22992 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 19:50:11.820211   22992 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 19:50:11.820282   22992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 19:50:11.820411   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 19:50:11.820439   22992 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 19:50:11.820445   22992 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 19:50:11.820490   22992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 19:50:11.820555   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 19:50:11.820575   22992 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 19:50:11.820578   22992 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 19:50:11.820599   22992 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 19:50:11.820644   22992 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-759751 san=[192.168.39.7 192.168.39.7 localhost 127.0.0.1 minikube ingress-addon-legacy-759751]
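provision.go generates the server certificate in-process with Go's crypto/x509, signing it with the shared minikube CA and stamping in the SAN list shown above. For inspecting or reproducing an equivalent certificate outside minikube, an openssl sketch (file names mirror the log; the flags are plain openssl, not anything minikube actually calls):

    # Sketch: openssl equivalent of the in-process server-cert generation.
    # minikube itself uses Go crypto/x509; this only mirrors the inputs above.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ingress-addon-legacy-759751"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.39.7,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-759751')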
	I0626 19:50:11.978340   22992 provision.go:172] copyRemoteCerts
	I0626 19:50:11.978393   22992 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 19:50:11.978413   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:11.981042   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.981370   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:11.981416   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:11.981566   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:11.981774   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:11.981949   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:11.982222   22992 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa Username:docker}
	I0626 19:50:12.070471   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0626 19:50:12.070535   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0626 19:50:12.093570   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0626 19:50:12.093651   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 19:50:12.116096   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0626 19:50:12.116183   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 19:50:12.138732   22992 provision.go:86] duration metric: configureAuth took 324.978541ms
	I0626 19:50:12.138771   22992 buildroot.go:189] setting minikube options for container-runtime
	I0626 19:50:12.138969   22992 config.go:182] Loaded profile config "ingress-addon-legacy-759751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0626 19:50:12.139056   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:12.141950   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.142360   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:12.142385   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.142585   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:12.142761   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:12.142929   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:12.143120   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:12.143307   22992 main.go:141] libmachine: Using SSH client type: native
	I0626 19:50:12.143702   22992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0626 19:50:12.143719   22992 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 19:50:12.449608   22992 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 19:50:12.449640   22992 main.go:141] libmachine: Checking connection to Docker...
	I0626 19:50:12.449650   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetURL
	I0626 19:50:12.450925   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Using libvirt version 6000000
	I0626 19:50:12.453346   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.453769   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:12.453802   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.453971   22992 main.go:141] libmachine: Docker is up and running!
	I0626 19:50:12.453986   22992 main.go:141] libmachine: Reticulating splines...
	I0626 19:50:12.453993   22992 client.go:171] LocalClient.Create took 27.145528805s
	I0626 19:50:12.454018   22992 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-759751" took 27.145611046s
	I0626 19:50:12.454038   22992 start.go:300] post-start starting for "ingress-addon-legacy-759751" (driver="kvm2")
	I0626 19:50:12.454054   22992 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 19:50:12.454078   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:50:12.454333   22992 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 19:50:12.454364   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:12.456868   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.457184   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:12.457222   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.457410   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:12.457600   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:12.457796   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:12.457936   22992 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa Username:docker}
	I0626 19:50:12.547032   22992 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 19:50:12.551748   22992 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 19:50:12.551777   22992 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 19:50:12.551855   22992 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 19:50:12.552001   22992 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 19:50:12.552017   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /etc/ssl/certs/144432.pem
	I0626 19:50:12.552130   22992 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 19:50:12.560713   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 19:50:12.584840   22992 start.go:303] post-start completed in 130.786288ms
	I0626 19:50:12.584895   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetConfigRaw
	I0626 19:50:12.585626   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetIP
	I0626 19:50:12.588497   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.588891   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:12.588939   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.589168   22992 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/config.json ...
	I0626 19:50:12.589367   22992 start.go:128] duration metric: createHost completed in 27.298980562s
	I0626 19:50:12.589420   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:12.591698   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.592031   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:12.592061   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.592198   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:12.592378   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:12.592536   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:12.592648   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:12.592778   22992 main.go:141] libmachine: Using SSH client type: native
	I0626 19:50:12.593169   22992 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0626 19:50:12.593181   22992 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 19:50:12.713989   22992 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687809012.702231422
	
	I0626 19:50:12.714011   22992 fix.go:206] guest clock: 1687809012.702231422
	I0626 19:50:12.714020   22992 fix.go:219] Guest: 2023-06-26 19:50:12.702231422 +0000 UTC Remote: 2023-06-26 19:50:12.589396763 +0000 UTC m=+43.835265909 (delta=112.834659ms)
	I0626 19:50:12.714046   22992 fix.go:190] guest clock delta is within tolerance: 112.834659ms
	I0626 19:50:12.714054   22992 start.go:83] releasing machines lock for "ingress-addon-legacy-759751", held for 27.423744986s
	I0626 19:50:12.714076   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:50:12.714347   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetIP
	I0626 19:50:12.716942   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.717309   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:12.717345   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.717583   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:50:12.718092   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:50:12.718255   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:50:12.718312   22992 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 19:50:12.718365   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:12.718463   22992 ssh_runner.go:195] Run: cat /version.json
	I0626 19:50:12.718480   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:12.720879   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.720914   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.721190   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:12.721233   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.721266   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:12.721291   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:12.721367   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:12.721484   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:12.721561   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:12.721626   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:12.721699   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:12.721782   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:12.721842   22992 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa Username:docker}
	I0626 19:50:12.721913   22992 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa Username:docker}
	I0626 19:50:12.806902   22992 ssh_runner.go:195] Run: systemctl --version
	I0626 19:50:12.831294   22992 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 19:50:12.995205   22992 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 19:50:13.001016   22992 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 19:50:13.001076   22992 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 19:50:13.017589   22992 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 19:50:13.017624   22992 start.go:466] detecting cgroup driver to use...
	I0626 19:50:13.017676   22992 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 19:50:13.030545   22992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 19:50:13.043951   22992 docker.go:196] disabling cri-docker service (if available) ...
	I0626 19:50:13.044002   22992 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 19:50:13.059775   22992 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 19:50:13.076045   22992 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 19:50:13.180496   22992 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 19:50:13.299590   22992 docker.go:212] disabling docker service ...
	I0626 19:50:13.299659   22992 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 19:50:13.313834   22992 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 19:50:13.324993   22992 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 19:50:13.435641   22992 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 19:50:13.544860   22992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 19:50:13.557154   22992 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 19:50:13.574465   22992 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0626 19:50:13.574523   22992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:50:13.584160   22992 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 19:50:13.584240   22992 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:50:13.593605   22992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:50:13.602564   22992 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
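The sed edits above only touch CRI-O's minikube drop-in: pin the pause image, switch the cgroup manager to cgroupfs, and force conmon into the pod cgroup. Assuming the stock drop-in layout (the TOML section headers are CRI-O convention and are not shown in the log), the file should end up roughly like this:

    # Sketch: inspect the drop-in after the edits; expected shape, roughly:
    cat /etc/crio/crio.conf.d/02-crio.conf
    #   [crio.image]
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   [crio.runtime]
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"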
	I0626 19:50:13.611568   22992 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 19:50:13.620752   22992 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 19:50:13.628762   22992 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 19:50:13.628847   22992 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 19:50:13.641673   22992 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
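The sysctl probe failed with status 255 only because br_netfilter was not loaded yet, so minikube falls back to modprobe plus a one-shot ip_forward write; both settings are session-only, which is fine for a throwaway VM. On a node that has to survive reboots, the standard Kubernetes prep is a modules-load.d/sysctl.d pair, sketched below:

    # Sketch: persist the bridge-netfilter module and forwarding sysctls
    # (standard Kubernetes node prep; the VM above only sets them for the session).
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' |
      sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system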
	I0626 19:50:13.650346   22992 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 19:50:13.757533   22992 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 19:50:13.938181   22992 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 19:50:13.938258   22992 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 19:50:13.943664   22992 start.go:534] Will wait 60s for crictl version
	I0626 19:50:13.943729   22992 ssh_runner.go:195] Run: which crictl
	I0626 19:50:13.947386   22992 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 19:50:13.979982   22992 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 19:50:13.980056   22992 ssh_runner.go:195] Run: crio --version
	I0626 19:50:14.021138   22992 ssh_runner.go:195] Run: crio --version
	I0626 19:50:14.064807   22992 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0626 19:50:14.066632   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetIP
	I0626 19:50:14.069201   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:14.069570   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:14.069605   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:14.069805   22992 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 19:50:14.074257   22992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 19:50:14.086617   22992 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0626 19:50:14.086673   22992 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 19:50:14.118580   22992 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0626 19:50:14.118649   22992 ssh_runner.go:195] Run: which lz4
	I0626 19:50:14.122430   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0626 19:50:14.122513   22992 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0626 19:50:14.126663   22992 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 19:50:14.126695   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0626 19:50:16.008113   22992 crio.go:444] Took 1.885618 seconds to copy over tarball
	I0626 19:50:16.008177   22992 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 19:50:19.281801   22992 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.273597784s)
	I0626 19:50:19.281831   22992 crio.go:451] Took 3.273694 seconds to extract the tarball
	I0626 19:50:19.281847   22992 ssh_runner.go:146] rm: /preloaded.tar.lz4
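Because crictl reported no preloaded kube-apiserver image, minikube shipped the ~495 MB preload tarball over SSH (about 1.9 s to copy, 3.3 s to extract into /var) and deleted it afterwards. A manual equivalent, modulo permissions (minikube streams the file through its SSH runner rather than plain scp, hence the sudo mv workaround here):

    # Sketch: hand-rolled version of the preload copy + extract above.
    MK=/home/jenkins/minikube-integration/16761-7242/.minikube
    scp -i "$MK/machines/ingress-addon-legacy-759751/id_rsa" \
      "$MK/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" \
      docker@192.168.39.7:preloaded.tar.lz4
    ssh -i "$MK/machines/ingress-addon-legacy-759751/id_rsa" docker@192.168.39.7 \
      'sudo mv preloaded.tar.lz4 /preloaded.tar.lz4 &&
       sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 &&
       sudo rm /preloaded.tar.lz4'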
	I0626 19:50:19.327456   22992 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 19:50:19.381141   22992 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0626 19:50:19.381168   22992 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 19:50:19.381207   22992 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 19:50:19.381261   22992 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0626 19:50:19.381280   22992 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0626 19:50:19.381303   22992 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0626 19:50:19.381355   22992 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0626 19:50:19.381266   22992 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 19:50:19.381470   22992 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0626 19:50:19.381475   22992 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0626 19:50:19.382514   22992 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0626 19:50:19.382526   22992 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0626 19:50:19.382515   22992 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 19:50:19.382516   22992 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0626 19:50:19.382567   22992 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0626 19:50:19.382522   22992 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0626 19:50:19.382522   22992 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0626 19:50:19.382545   22992 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 19:50:19.565993   22992 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0626 19:50:19.578752   22992 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0626 19:50:19.609195   22992 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0626 19:50:19.609246   22992 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0626 19:50:19.609298   22992 ssh_runner.go:195] Run: which crictl
	I0626 19:50:19.637105   22992 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0626 19:50:19.637203   22992 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0626 19:50:19.637246   22992 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0626 19:50:19.637290   22992 ssh_runner.go:195] Run: which crictl
	I0626 19:50:19.669893   22992 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0626 19:50:19.669926   22992 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0626 19:50:19.670821   22992 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0626 19:50:19.672654   22992 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0626 19:50:19.674290   22992 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0626 19:50:19.687760   22992 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0626 19:50:19.702067   22992 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 19:50:19.719012   22992 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0626 19:50:19.778491   22992 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0626 19:50:19.778535   22992 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0626 19:50:19.778581   22992 ssh_runner.go:195] Run: which crictl
	I0626 19:50:19.809187   22992 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0626 19:50:19.809234   22992 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0626 19:50:19.809287   22992 ssh_runner.go:195] Run: which crictl
	I0626 19:50:19.809202   22992 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0626 19:50:19.809347   22992 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0626 19:50:19.809399   22992 ssh_runner.go:195] Run: which crictl
	I0626 19:50:19.825999   22992 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0626 19:50:19.826030   22992 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0626 19:50:19.826046   22992 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0626 19:50:19.826059   22992 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 19:50:19.826100   22992 ssh_runner.go:195] Run: which crictl
	I0626 19:50:19.826161   22992 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0626 19:50:19.826176   22992 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0626 19:50:19.826100   22992 ssh_runner.go:195] Run: which crictl
	I0626 19:50:19.826102   22992 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0626 19:50:19.883452   22992 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0626 19:50:19.891480   22992 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0626 19:50:19.891564   22992 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0626 19:50:19.891569   22992 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 19:50:19.891607   22992 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0626 19:50:19.930801   22992 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0626 19:50:19.930981   22992 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0626 19:50:20.241678   22992 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 19:50:20.382965   22992 cache_images.go:92] LoadImages completed in 1.001778442s
	W0626 19:50:20.383066   22992 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0626 19:50:20.383147   22992 ssh_runner.go:195] Run: crio config
	I0626 19:50:20.444311   22992 cni.go:84] Creating CNI manager for ""
	I0626 19:50:20.444338   22992 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 19:50:20.444349   22992 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 19:50:20.444370   22992 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-759751 NodeName:ingress-addon-legacy-759751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0626 19:50:20.444542   22992 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-759751"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 19:50:20.444611   22992 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-759751 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-759751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 19:50:20.444659   22992 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0626 19:50:20.457818   22992 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 19:50:20.457906   22992 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 19:50:20.468687   22992 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (434 bytes)
	I0626 19:50:20.486273   22992 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
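With 10-kubeadm.conf and kubelet.service written, the new ExecStart above is inert until systemd re-reads its unit files; minikube issues the reload later in the bootstrap, and the manual equivalent is:

    # Sketch: pick up the freshly written unit + drop-in and start kubelet.
    # minikube triggers this itself during the kubeadm bootstrap, not at this point.
    sudo systemctl daemon-reload
    sudo systemctl enable --now kubelet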
	I0626 19:50:20.503152   22992 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0626 19:50:20.520129   22992 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I0626 19:50:20.524627   22992 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
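At this point the rendered config sits at /var/tmp/minikube/kubeadm.yaml.new and control-plane.minikube.internal resolves, so the setup can be sanity-checked before the real init runs. A sketch, assuming kubeadm v1.18's phase layout (note that the preflight phase will also try to pull control-plane images):

    # Sketch: dry-run the preflight checks against the generated config.
    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml.new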
	I0626 19:50:20.536816   22992 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751 for IP: 192.168.39.7
	I0626 19:50:20.536852   22992 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:50:20.537001   22992 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 19:50:20.537071   22992 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 19:50:20.537132   22992 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.key
	I0626 19:50:20.537151   22992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt with IP's: []
	I0626 19:50:20.707351   22992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt ...
	I0626 19:50:20.707383   22992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: {Name:mk31c12927d140b10d29e2074bb63271df6e08e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:50:20.707584   22992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.key ...
	I0626 19:50:20.707600   22992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.key: {Name:mk0700d77fb36dc3d53c3fa53c41cec667323574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:50:20.707704   22992 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.key.638b9eac
	I0626 19:50:20.707722   22992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.crt.638b9eac with IP's: [192.168.39.7 10.96.0.1 127.0.0.1 10.0.0.1]
	I0626 19:50:20.792882   22992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.crt.638b9eac ...
	I0626 19:50:20.792918   22992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.crt.638b9eac: {Name:mk39c1906c0e6d746aacfd66de53eac25982b683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:50:20.793111   22992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.key.638b9eac ...
	I0626 19:50:20.793128   22992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.key.638b9eac: {Name:mk09c497da1c553f2b7f9605280f07f297db13ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:50:20.793221   22992 certs.go:337] copying /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.crt.638b9eac -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.crt
	I0626 19:50:20.793330   22992 certs.go:341] copying /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.key.638b9eac -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.key
	I0626 19:50:20.793437   22992 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.key
	I0626 19:50:20.793459   22992 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.crt with IP's: []
	I0626 19:50:20.898911   22992 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.crt ...
	I0626 19:50:20.898943   22992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.crt: {Name:mk3e906bbdf6536c8012e55288c9a8c15f087909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:50:20.899130   22992 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.key ...
	I0626 19:50:20.899145   22992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.key: {Name:mkad7bc589f6145eae26cf772dfe7dfd5f35b512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
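
minikube generates these profile certificates in-process (crypto.go), signing them with the shared minikube CA. A rough openssl equivalent for the apiserver certificate, embedding the SANs logged above, would be the following hypothetical sketch (not the literal implementation; file names are illustrative):

    # assume ca.crt/ca.key are the shared minikube CA from ~/.minikube
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:192.168.39.7,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1')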
	I0626 19:50:20.899263   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0626 19:50:20.899287   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0626 19:50:20.899303   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0626 19:50:20.899323   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0626 19:50:20.899356   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0626 19:50:20.899374   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0626 19:50:20.899390   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0626 19:50:20.899408   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0626 19:50:20.899486   22992 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 19:50:20.899534   22992 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 19:50:20.899550   22992 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 19:50:20.899590   22992 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 19:50:20.899626   22992 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 19:50:20.899661   22992 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 19:50:20.899719   22992 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 19:50:20.899766   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:50:20.899786   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem -> /usr/share/ca-certificates/14443.pem
	I0626 19:50:20.899804   22992 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /usr/share/ca-certificates/144432.pem
	I0626 19:50:20.900377   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 19:50:20.925426   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 19:50:20.949570   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 19:50:20.976215   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 19:50:20.999874   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 19:50:21.023834   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 19:50:21.047481   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 19:50:21.072063   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 19:50:21.096259   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 19:50:21.120592   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 19:50:21.144282   22992 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 19:50:21.168325   22992 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 19:50:21.184474   22992 ssh_runner.go:195] Run: openssl version
	I0626 19:50:21.190150   22992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 19:50:21.200839   22992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 19:50:21.205934   22992 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 19:50:21.206000   22992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 19:50:21.211807   22992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 19:50:21.222703   22992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 19:50:21.233904   22992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:50:21.238984   22992 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:50:21.239038   22992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:50:21.244655   22992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 19:50:21.255537   22992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 19:50:21.266215   22992 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 19:50:21.270581   22992 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 19:50:21.270632   22992 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 19:50:21.276463   22992 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
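
The openssl/ln sequence above wires each PEM into the system trust store by its OpenSSL subject hash, which is why the link names (3ec20f2e.0, b5213941.0, 51391683.0) look opaque. Condensed into a loop, the same steps are:

    for pem in /usr/share/ca-certificates/*.pem; do
      name=$(basename "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/$name"            # expose the cert under /etc/ssl/certs
      h=$(openssl x509 -hash -noout -in "$pem")            # e.g. b5213941 for minikubeCA.pem
      sudo ln -fs "/etc/ssl/certs/$name" "/etc/ssl/certs/${h}.0"  # hash link used by OpenSSL lookup
    done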
	I0626 19:50:21.287167   22992 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 19:50:21.291547   22992 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 19:50:21.291609   22992 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-759751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-759751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:50:21.291707   22992 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 19:50:21.291749   22992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 19:50:21.322828   22992 cri.go:89] found id: ""
	I0626 19:50:21.322902   22992 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 19:50:21.333030   22992 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 19:50:21.342573   22992 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 19:50:21.352159   22992 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 19:50:21.352205   22992 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0626 19:50:21.407511   22992 kubeadm.go:322] W0626 19:50:21.402210     969 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0626 19:50:21.524667   22992 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 19:50:24.324423   22992 kubeadm.go:322] W0626 19:50:24.320277     969 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0626 19:50:24.325565   22992 kubeadm.go:322] W0626 19:50:24.321523     969 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0626 19:50:34.320052   22992 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0626 19:50:34.320116   22992 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 19:50:34.320187   22992 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 19:50:34.320293   22992 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 19:50:34.320412   22992 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 19:50:34.320565   22992 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 19:50:34.320784   22992 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 19:50:34.320845   22992 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 19:50:34.320921   22992 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 19:50:34.322800   22992 out.go:204]   - Generating certificates and keys ...
	I0626 19:50:34.322895   22992 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 19:50:34.322978   22992 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 19:50:34.323084   22992 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0626 19:50:34.323167   22992 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0626 19:50:34.323245   22992 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0626 19:50:34.323338   22992 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0626 19:50:34.323413   22992 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0626 19:50:34.323578   22992 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-759751 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0626 19:50:34.323652   22992 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0626 19:50:34.323803   22992 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-759751 localhost] and IPs [192.168.39.7 127.0.0.1 ::1]
	I0626 19:50:34.323909   22992 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0626 19:50:34.324019   22992 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0626 19:50:34.324112   22992 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0626 19:50:34.324196   22992 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 19:50:34.324266   22992 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 19:50:34.324338   22992 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 19:50:34.324432   22992 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 19:50:34.324512   22992 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 19:50:34.324594   22992 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 19:50:34.326644   22992 out.go:204]   - Booting up control plane ...
	I0626 19:50:34.326754   22992 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 19:50:34.326874   22992 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 19:50:34.326973   22992 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 19:50:34.327098   22992 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 19:50:34.327277   22992 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 19:50:34.327357   22992 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503125 seconds
	I0626 19:50:34.327445   22992 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 19:50:34.327557   22992 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 19:50:34.327611   22992 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 19:50:34.327731   22992 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-759751 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0626 19:50:34.327783   22992 kubeadm.go:322] [bootstrap-token] Using token: nhuwwp.5slh54mjgiqcrhq4
	I0626 19:50:34.329446   22992 out.go:204]   - Configuring RBAC rules ...
	I0626 19:50:34.329530   22992 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 19:50:34.329597   22992 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 19:50:34.329718   22992 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 19:50:34.329835   22992 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 19:50:34.329937   22992 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 19:50:34.330017   22992 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 19:50:34.330111   22992 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 19:50:34.330150   22992 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 19:50:34.330195   22992 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 19:50:34.330199   22992 kubeadm.go:322] 
	I0626 19:50:34.330256   22992 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 19:50:34.330264   22992 kubeadm.go:322] 
	I0626 19:50:34.330341   22992 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 19:50:34.330348   22992 kubeadm.go:322] 
	I0626 19:50:34.330369   22992 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 19:50:34.330416   22992 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 19:50:34.330462   22992 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 19:50:34.330466   22992 kubeadm.go:322] 
	I0626 19:50:34.330506   22992 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 19:50:34.330583   22992 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 19:50:34.330638   22992 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 19:50:34.330644   22992 kubeadm.go:322] 
	I0626 19:50:34.330710   22992 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 19:50:34.330786   22992 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 19:50:34.330799   22992 kubeadm.go:322] 
	I0626 19:50:34.330874   22992 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nhuwwp.5slh54mjgiqcrhq4 \
	I0626 19:50:34.330968   22992 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 19:50:34.330994   22992 kubeadm.go:322]     --control-plane 
	I0626 19:50:34.330997   22992 kubeadm.go:322] 
	I0626 19:50:34.331063   22992 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 19:50:34.331074   22992 kubeadm.go:322] 
	I0626 19:50:34.331143   22992 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nhuwwp.5slh54mjgiqcrhq4 \
	I0626 19:50:34.331235   22992 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
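
The --discovery-token-ca-cert-hash printed in the join command can be re-derived from the cluster CA on the node; the standard kubeadm recipe, adjusted for minikube's certificate directory, is:

    # sha256 of the DER-encoded CA public key; should match the 7411e7... value above
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'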
	I0626 19:50:34.331246   22992 cni.go:84] Creating CNI manager for ""
	I0626 19:50:34.331258   22992 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 19:50:34.332999   22992 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 19:50:34.334537   22992 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 19:50:34.343983   22992 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
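
The 457-byte conflist scp'd above is minikube's bridge CNI configuration. Writing an equivalent file by hand would look roughly like this (a sketch following the bridge plugin's documented schema; the exact bytes minikube templates, including the pod subnet, may differ):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF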
	I0626 19:50:34.362441   22992 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 19:50:34.362534   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:34.362576   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=ingress-addon-legacy-759751 minikube.k8s.io/updated_at=2023_06_26T19_50_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:34.383559   22992 ops.go:34] apiserver oom_adj: -16
	I0626 19:50:34.530892   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:35.206968   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:35.707298   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:36.207267   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:36.706398   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:37.207290   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:37.706369   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:38.206855   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:38.706505   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:39.206673   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:39.706573   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:40.206518   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:40.706367   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:41.206402   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:41.707305   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:42.206627   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:42.706771   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:43.207020   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:43.707359   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:44.206363   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:44.706912   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:45.207094   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:45.706815   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:46.207003   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:46.707218   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:47.206875   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:47.706390   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:48.206502   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:48.706630   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:49.206870   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:49.707170   22992 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 19:50:50.297852   22992 kubeadm.go:1081] duration metric: took 15.935386087s to wait for elevateKubeSystemPrivileges.
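
The burst of kubectl get sa default calls above is a half-second poll for the default service account, which only exists once the token controller has come up; elevateKubeSystemPrivileges blocks on it. A shell condensation of the loop:

    # poll (as the log does, twice a second) until the default SA exists
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done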
	I0626 19:50:50.297896   22992 kubeadm.go:406] StartCluster complete in 29.006294327s
	I0626 19:50:50.297920   22992 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:50:50.298001   22992 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 19:50:50.298761   22992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:50:50.299003   22992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 19:50:50.299089   22992 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 19:50:50.299169   22992 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-759751"
	I0626 19:50:50.299201   22992 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-759751"
	I0626 19:50:50.299226   22992 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-759751"
	I0626 19:50:50.299251   22992 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-759751"
	I0626 19:50:50.299285   22992 host.go:66] Checking if "ingress-addon-legacy-759751" exists ...
	I0626 19:50:50.299204   22992 config.go:182] Loaded profile config "ingress-addon-legacy-759751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0626 19:50:50.299595   22992 kapi.go:59] client config for ingress-addon-legacy-759751: &rest.Config{Host:"https://192.168.39.7:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 19:50:50.299745   22992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:50:50.299777   22992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:50:50.299886   22992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:50:50.299940   22992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:50:50.300545   22992 cert_rotation.go:137] Starting client certificate rotation controller
	I0626 19:50:50.314795   22992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0626 19:50:50.314864   22992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0626 19:50:50.315176   22992 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:50:50.315217   22992 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:50:50.315686   22992 main.go:141] libmachine: Using API Version  1
	I0626 19:50:50.315701   22992 main.go:141] libmachine: Using API Version  1
	I0626 19:50:50.315705   22992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:50:50.315718   22992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:50:50.316067   22992 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:50:50.316103   22992 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:50:50.316307   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetState
	I0626 19:50:50.316589   22992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:50:50.316616   22992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:50:50.318817   22992 kapi.go:59] client config for ingress-addon-legacy-759751: &rest.Config{Host:"https://192.168.39.7:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 19:50:50.331814   22992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0626 19:50:50.332275   22992 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:50:50.332825   22992 main.go:141] libmachine: Using API Version  1
	I0626 19:50:50.332853   22992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:50:50.333206   22992 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:50:50.333393   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetState
	I0626 19:50:50.334820   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:50:50.337129   22992 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 19:50:50.338870   22992 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 19:50:50.338890   22992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 19:50:50.338917   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:50.342263   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:50.342825   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:50.342856   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:50.343071   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:50.343260   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:50.343416   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:50.343546   22992 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa Username:docker}
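
The SSH parameters logged above are enough to reproduce the session by hand if a run like this needs debugging:

    # connect to the node VM with the per-machine key from this run
    ssh -p 22 \
      -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa \
      docker@192.168.39.7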
	I0626 19:50:50.346841   22992 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-759751"
	I0626 19:50:50.346885   22992 host.go:66] Checking if "ingress-addon-legacy-759751" exists ...
	I0626 19:50:50.347269   22992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:50:50.347303   22992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:50:50.362224   22992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40237
	I0626 19:50:50.362674   22992 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:50:50.363137   22992 main.go:141] libmachine: Using API Version  1
	I0626 19:50:50.363162   22992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:50:50.363542   22992 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:50:50.364163   22992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:50:50.364202   22992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:50:50.378868   22992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
	I0626 19:50:50.379276   22992 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:50:50.379833   22992 main.go:141] libmachine: Using API Version  1
	I0626 19:50:50.379863   22992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:50:50.380199   22992 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:50:50.380413   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetState
	I0626 19:50:50.382059   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .DriverName
	I0626 19:50:50.382315   22992 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 19:50:50.382333   22992 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 19:50:50.382362   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHHostname
	I0626 19:50:50.385080   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:50.385566   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:4e:ba", ip: ""} in network mk-ingress-addon-legacy-759751: {Iface:virbr1 ExpiryTime:2023-06-26 20:50:00 +0000 UTC Type:0 Mac:52:54:00:c8:4e:ba Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:ingress-addon-legacy-759751 Clientid:01:52:54:00:c8:4e:ba}
	I0626 19:50:50.385595   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | domain ingress-addon-legacy-759751 has defined IP address 192.168.39.7 and MAC address 52:54:00:c8:4e:ba in network mk-ingress-addon-legacy-759751
	I0626 19:50:50.385743   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHPort
	I0626 19:50:50.385850   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHKeyPath
	I0626 19:50:50.385999   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .GetSSHUsername
	I0626 19:50:50.386097   22992 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/ingress-addon-legacy-759751/id_rsa Username:docker}
	I0626 19:50:50.502264   22992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 19:50:50.559185   22992 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 19:50:50.649990   22992 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
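
The sed pipeline above splices a hosts block into the CoreDNS Corefile ahead of the forward directive (and a log directive ahead of errors), so host.minikube.internal resolves to the host gateway from inside pods. The inserted stanza, as the -e expressions spell out, is:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }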
	I0626 19:50:51.134800   22992 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-759751" context rescaled to 1 replicas
	I0626 19:50:51.134843   22992 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 19:50:51.136939   22992 out.go:177] * Verifying Kubernetes components...
	I0626 19:50:51.138443   22992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 19:50:51.798988   22992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.296685903s)
	I0626 19:50:51.799038   22992 main.go:141] libmachine: Making call to close driver server
	I0626 19:50:51.799049   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .Close
	I0626 19:50:51.799056   22992 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.149022907s)
	I0626 19:50:51.799087   22992 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0626 19:50:51.798988   22992 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.239777646s)
	I0626 19:50:51.799138   22992 main.go:141] libmachine: Making call to close driver server
	I0626 19:50:51.799156   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .Close
	I0626 19:50:51.799313   22992 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:50:51.799330   22992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:50:51.799340   22992 main.go:141] libmachine: Making call to close driver server
	I0626 19:50:51.799350   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .Close
	I0626 19:50:51.799425   22992 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:50:51.799440   22992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:50:51.799449   22992 main.go:141] libmachine: Making call to close driver server
	I0626 19:50:51.799449   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Closing plugin on server side
	I0626 19:50:51.799457   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .Close
	I0626 19:50:51.799572   22992 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:50:51.799596   22992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:50:51.799973   22992 kapi.go:59] client config for ingress-addon-legacy-759751: &rest.Config{Host:"https://192.168.39.7:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 19:50:51.800250   22992 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-759751" to be "Ready" ...
	I0626 19:50:51.800577   22992 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:50:51.800605   22992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:50:51.800623   22992 main.go:141] libmachine: Making call to close driver server
	I0626 19:50:51.800631   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) Calling .Close
	I0626 19:50:51.800885   22992 main.go:141] libmachine: (ingress-addon-legacy-759751) DBG | Closing plugin on server side
	I0626 19:50:51.800908   22992 main.go:141] libmachine: Successfully made call to close driver server
	I0626 19:50:51.800926   22992 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 19:50:51.803305   22992 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0626 19:50:51.805123   22992 addons.go:499] enable addons completed in 1.506035717s: enabled=[storage-provisioner default-storageclass]
	I0626 19:50:51.833782   22992 node_ready.go:49] node "ingress-addon-legacy-759751" has status "Ready":"True"
	I0626 19:50:51.833805   22992 node_ready.go:38] duration metric: took 33.524617ms waiting for node "ingress-addon-legacy-759751" to be "Ready" ...
	I0626 19:50:51.833815   22992 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 19:50:51.840502   22992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-4rf97" in "kube-system" namespace to be "Ready" ...
	I0626 19:50:53.854344   22992 pod_ready.go:102] pod "coredns-66bff467f8-4rf97" in "kube-system" namespace has status "Ready":"False"
	I0626 19:50:56.354610   22992 pod_ready.go:102] pod "coredns-66bff467f8-4rf97" in "kube-system" namespace has status "Ready":"False"
	I0626 19:50:58.854675   22992 pod_ready.go:102] pod "coredns-66bff467f8-4rf97" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:01.354331   22992 pod_ready.go:102] pod "coredns-66bff467f8-4rf97" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:03.853436   22992 pod_ready.go:102] pod "coredns-66bff467f8-4rf97" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:06.353336   22992 pod_ready.go:102] pod "coredns-66bff467f8-4rf97" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:08.353705   22992 pod_ready.go:102] pod "coredns-66bff467f8-4rf97" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:10.854829   22992 pod_ready.go:102] pod "coredns-66bff467f8-4rf97" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:13.353503   22992 pod_ready.go:102] pod "coredns-66bff467f8-4rf97" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:14.848664   22992 pod_ready.go:97] error getting pod "coredns-66bff467f8-4rf97" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-4rf97" not found
	I0626 19:51:14.848694   22992 pod_ready.go:81] duration metric: took 23.008166114s waiting for pod "coredns-66bff467f8-4rf97" in "kube-system" namespace to be "Ready" ...
	E0626 19:51:14.848703   22992 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-4rf97" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-4rf97" not found
	I0626 19:51:14.848709   22992 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-c5wmz" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:16.861171   22992 pod_ready.go:102] pod "coredns-66bff467f8-c5wmz" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:19.359776   22992 pod_ready.go:102] pod "coredns-66bff467f8-c5wmz" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:21.363357   22992 pod_ready.go:102] pod "coredns-66bff467f8-c5wmz" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:23.367568   22992 pod_ready.go:102] pod "coredns-66bff467f8-c5wmz" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:25.861315   22992 pod_ready.go:102] pod "coredns-66bff467f8-c5wmz" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:27.866817   22992 pod_ready.go:102] pod "coredns-66bff467f8-c5wmz" in "kube-system" namespace has status "Ready":"False"
	I0626 19:51:28.361215   22992 pod_ready.go:92] pod "coredns-66bff467f8-c5wmz" in "kube-system" namespace has status "Ready":"True"
	I0626 19:51:28.361242   22992 pod_ready.go:81] duration metric: took 13.512526201s waiting for pod "coredns-66bff467f8-c5wmz" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.361254   22992 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-759751" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.366330   22992 pod_ready.go:92] pod "etcd-ingress-addon-legacy-759751" in "kube-system" namespace has status "Ready":"True"
	I0626 19:51:28.366347   22992 pod_ready.go:81] duration metric: took 5.086357ms waiting for pod "etcd-ingress-addon-legacy-759751" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.366355   22992 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-759751" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.371620   22992 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-759751" in "kube-system" namespace has status "Ready":"True"
	I0626 19:51:28.371635   22992 pod_ready.go:81] duration metric: took 5.274187ms waiting for pod "kube-apiserver-ingress-addon-legacy-759751" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.371643   22992 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-759751" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.378067   22992 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-759751" in "kube-system" namespace has status "Ready":"True"
	I0626 19:51:28.378090   22992 pod_ready.go:81] duration metric: took 6.440578ms waiting for pod "kube-controller-manager-ingress-addon-legacy-759751" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.378103   22992 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47w6x" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.383224   22992 pod_ready.go:92] pod "kube-proxy-47w6x" in "kube-system" namespace has status "Ready":"True"
	I0626 19:51:28.383248   22992 pod_ready.go:81] duration metric: took 5.137622ms waiting for pod "kube-proxy-47w6x" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.383259   22992 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-759751" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.554706   22992 request.go:628] Waited for 171.392001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-759751
	I0626 19:51:28.754429   22992 request.go:628] Waited for 196.377614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes/ingress-addon-legacy-759751
	I0626 19:51:28.758087   22992 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-759751" in "kube-system" namespace has status "Ready":"True"
	I0626 19:51:28.758110   22992 pod_ready.go:81] duration metric: took 374.843591ms waiting for pod "kube-scheduler-ingress-addon-legacy-759751" in "kube-system" namespace to be "Ready" ...
	I0626 19:51:28.758120   22992 pod_ready.go:38] duration metric: took 36.924292675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
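
The per-pod readiness pass above can be approximated from outside the test harness with kubectl wait against the same labels, e.g. for the DNS pods (a sketch; the harness itself polls the API directly):

    kubectl --kubeconfig=/home/jenkins/minikube-integration/16761-7242/kubeconfig \
      -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m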
	I0626 19:51:28.758134   22992 api_server.go:52] waiting for apiserver process to appear ...
	I0626 19:51:28.758174   22992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 19:51:28.772600   22992 api_server.go:72] duration metric: took 37.637731684s to wait for apiserver process to appear ...
	I0626 19:51:28.772625   22992 api_server.go:88] waiting for apiserver healthz status ...
	I0626 19:51:28.772642   22992 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0626 19:51:28.779463   22992 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0626 19:51:28.780633   22992 api_server.go:141] control plane version: v1.18.20
	I0626 19:51:28.780654   22992 api_server.go:131] duration metric: took 8.023331ms to wait for apiserver health ...
	I0626 19:51:28.780662   22992 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 19:51:28.955147   22992 request.go:628] Waited for 174.430209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0626 19:51:28.961916   22992 system_pods.go:59] 7 kube-system pods found
	I0626 19:51:28.961943   22992 system_pods.go:61] "coredns-66bff467f8-c5wmz" [92ea4c36-dc23-4576-a16c-14053c5e8040] Running
	I0626 19:51:28.961948   22992 system_pods.go:61] "etcd-ingress-addon-legacy-759751" [13a6a685-c46e-438c-a8ac-adcbf270fd45] Running
	I0626 19:51:28.961952   22992 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-759751" [27f27249-19c7-4043-8c48-d8c0262fefe9] Running
	I0626 19:51:28.961956   22992 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-759751" [84890a2f-15c4-4a4c-9e33-05f87bd41cb5] Running
	I0626 19:51:28.961960   22992 system_pods.go:61] "kube-proxy-47w6x" [706df70a-65e2-47ed-9d36-6b8c6eb0d94e] Running
	I0626 19:51:28.961964   22992 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-759751" [7f1a471a-a30a-4ab9-ac97-d5f7403c57cc] Running
	I0626 19:51:28.961968   22992 system_pods.go:61] "storage-provisioner" [0e3384e0-e295-4914-90fe-9b6defe12171] Running
	I0626 19:51:28.961975   22992 system_pods.go:74] duration metric: took 181.307718ms to wait for pod list to return data ...
	I0626 19:51:28.961990   22992 default_sa.go:34] waiting for default service account to be created ...
	I0626 19:51:29.154440   22992 request.go:628] Waited for 192.379835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/default/serviceaccounts
	I0626 19:51:29.157777   22992 default_sa.go:45] found service account: "default"
	I0626 19:51:29.157800   22992 default_sa.go:55] duration metric: took 195.798182ms for default service account to be created ...
	I0626 19:51:29.157807   22992 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 19:51:29.354203   22992 request.go:628] Waited for 196.335549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/namespaces/kube-system/pods
	I0626 19:51:29.360572   22992 system_pods.go:86] 7 kube-system pods found
	I0626 19:51:29.360606   22992 system_pods.go:89] "coredns-66bff467f8-c5wmz" [92ea4c36-dc23-4576-a16c-14053c5e8040] Running
	I0626 19:51:29.360621   22992 system_pods.go:89] "etcd-ingress-addon-legacy-759751" [13a6a685-c46e-438c-a8ac-adcbf270fd45] Running
	I0626 19:51:29.360628   22992 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-759751" [27f27249-19c7-4043-8c48-d8c0262fefe9] Running
	I0626 19:51:29.360635   22992 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-759751" [84890a2f-15c4-4a4c-9e33-05f87bd41cb5] Running
	I0626 19:51:29.360641   22992 system_pods.go:89] "kube-proxy-47w6x" [706df70a-65e2-47ed-9d36-6b8c6eb0d94e] Running
	I0626 19:51:29.360647   22992 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-759751" [7f1a471a-a30a-4ab9-ac97-d5f7403c57cc] Running
	I0626 19:51:29.360653   22992 system_pods.go:89] "storage-provisioner" [0e3384e0-e295-4914-90fe-9b6defe12171] Running
	I0626 19:51:29.360661   22992 system_pods.go:126] duration metric: took 202.848706ms to wait for k8s-apps to be running ...
	I0626 19:51:29.360671   22992 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 19:51:29.360725   22992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 19:51:29.375340   22992 system_svc.go:56] duration metric: took 14.662327ms WaitForService to wait for kubelet.
	I0626 19:51:29.375367   22992 kubeadm.go:581] duration metric: took 38.240502424s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 19:51:29.375383   22992 node_conditions.go:102] verifying NodePressure condition ...
	I0626 19:51:29.554790   22992 request.go:628] Waited for 179.333625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.7:8443/api/v1/nodes
	I0626 19:51:29.558049   22992 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 19:51:29.558079   22992 node_conditions.go:123] node cpu capacity is 2
	I0626 19:51:29.558089   22992 node_conditions.go:105] duration metric: took 182.701565ms to run NodePressure ...
	I0626 19:51:29.558099   22992 start.go:228] waiting for startup goroutines ...
	I0626 19:51:29.558104   22992 start.go:233] waiting for cluster config update ...
	I0626 19:51:29.558112   22992 start.go:242] writing updated cluster config ...
	I0626 19:51:29.558376   22992 ssh_runner.go:195] Run: rm -f paused
	I0626 19:51:29.605546   22992 start.go:652] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0626 19:51:29.607668   22992 out.go:177] 
	W0626 19:51:29.609216   22992 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0626 19:51:29.610517   22992 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0626 19:51:29.611853   22992 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-759751" cluster and "default" namespace by default
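
Two spot checks can reproduce what the log above verified, using the apiserver endpoint and profile name taken from that log (sketches only, not part of the recorded test run):

    # apiserver health probe, mirroring the healthz check logged at 19:51:28;
    # -k skips TLS verification of the cluster's self-signed certificate, and
    # /healthz is readable anonymously under the default RBAC bindings
    curl -k https://192.168.39.7:8443/healthz
    # expected response body: ok

    # the version-matched kubectl bundled with minikube, as the warning above
    # suggests, avoids the 1.27.3 vs 1.18.20 skew (minor skew: 9)
    minikube -p ingress-addon-legacy-759751 kubectl -- get pods -A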
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 19:49:57 UTC, ends at Mon 2023-06-26 19:54:31 UTC. --
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.354609384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79d13d53bb2eff77c56c61570c3febe0ee8d388faec798aa3fade14741a6d952,PodSandboxId:f40cc42617aa6636f34c371ab1516a958c99a43932b113bf0155e89a9dbebe0b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687809264313911710,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-smn5g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08b5fd19-acb1-4bb8-a60d-3471573759a3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d36c8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c271403a7b2460dec447d4968b9c8d5782df1b69c15427e1a2ef1786fc14c15,PodSandboxId:084fd910ecd31fe7eeb7a5fa8db591589a1cd997177aae539946a1881b992fdf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687809124731661353,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e453c39-db4b-4ce0-bf5b-62570a4cdb5b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6bd6339c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a065ab05079e55edc48a2ebe123e1713f153225e0f933af4761d124e80dff5,PodSandboxId:2f3d93341e55aebf05043184eecb8f4302003beaf1204ea011ff1cdfcfa08f61,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1687809106626981846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t5jmn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 346376a5-3a80-45ae-b431-cf22260afe5a,},Annotations:map[string]string{io.kubernetes.container.hash: 942337c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:243d79e9cd140b545d0878635cb3fb661da73f5873e8b41a7d0350afca2f8d34,PodSandboxId:b89b03717e14ce3e07c43df1622bfb6a95a3f495d33cc92852e12e11abd9bab9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809096181968297,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zcbpv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b43743ca-a616-4bb0-8af2-88365f57d7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 2aeb7ef9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0722220a1ae2ea44a1fbbf338158c8ecb878dca97b390747c3cd89817006f309,PodSandboxId:1c6589a02d03329ec585cd24951556b37b298dab11821726e655c753b0a1f436,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809095021108102,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sj6dj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 25dc6626-e3b0-4136-9d73-def95a263cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 48ab75d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6f3f36bbca9ec5b1b0819ea005164c9018b00e7c134aeb1d70e888558815c0,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809083339015294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abeb58f4336350b7f69886414cc319137c537240747663faa3e598319a85326e,PodSandboxId:2cb34a7fd5244a71cc46afa9b067e740f020187714d93aca4775b7ba1e0b838c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1687809052813969344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47w6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706df70a-65e2-47ed-9d36-6b8c6eb0d94e,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba8fc76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02ccfac365933f9d7d7dfedb663857aed3cc5c9ad15a5d7af38c9056cabf704,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687809052520745269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daadbe498f83e2f98eeee849e7096492687b21c860a20e2e82da9e1fe7b4e15,PodSandboxId:2ea22c8665e25bd49758b38e0ea5504ea84b746c832f029608e43c7c1e89144e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a7
54c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1687809051465356518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-c5wmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ea4c36-dc23-4576-a16c-14053c5e8040,},Annotations:map[string]string{io.kubernetes.container.hash: bf8a01f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe02c5efc2944ec52df33d352d7469a8864125db46e01ee50ade0590f64d1ceb,PodSand
boxId:57a5c26eddc315735e6ad17ebf64e510cb9c78875316093dcecc45fa60e1f257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1687809027520584939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d920ce23971dfe05b745be63e4819aec,},Annotations:map[string]string{io.kubernetes.container.hash: 4120caf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90127ff30895c83ec7c63f6b60cf26664392d065b2b271c0b973be9299f104cb,PodSandboxId:8120ce8ec6016c30ca2916b96138c1f3b9242a16
1be079da1fba288100758b1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1687809026536031408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6366e8ced31a9bb0c0e73aed13c64d70db908fafa78ae922c43b34b4e386f10c,PodSandboxId:468299f1ff8eabec0ff7f4646d68e2317611ab025f9af8
e3052db8235890fdd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1687809026208725588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc7751c76f98f0471dff26f691aafcb,},Annotations:map[string]string{io.kubernetes.container.hash: 67aabc89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edef46c7fcd6dd0245e8796eb1be15d9e826aba1045181ea1b23c2ecaa058836,PodSandboxId:b72e28f28fd9205147e9d8cd2705b0824ec51599d8d7d3662ec8
92a2cdd4cdc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1687809026141636544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=43377f7e-5c2a-44fd-b10a-579d9533ec37 name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
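
The ListContainers request/response pairs in this journal are periodic CRI polls; the container list they return can be reproduced interactively with crictl inside the minikube VM. A sketch, assuming crictl is available in the VM (minikube's ISO ships it for CRI-O runtimes) and reusing the profile name from the log above:

    # list all containers, running and exited, via the CRI-O endpoint
    minikube -p ingress-addon-legacy-759751 ssh -- sudo crictl ps -a

    # list the pod sandboxes referenced by the PodSandboxId fields above
    minikube -p ingress-addon-legacy-759751 ssh -- sudo crictl pods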
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.449807182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=208cf195-c586-42f5-be6c-5c5510eb66e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.449939513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=208cf195-c586-42f5-be6c-5c5510eb66e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.450288874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79d13d53bb2eff77c56c61570c3febe0ee8d388faec798aa3fade14741a6d952,PodSandboxId:f40cc42617aa6636f34c371ab1516a958c99a43932b113bf0155e89a9dbebe0b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687809264313911710,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-smn5g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08b5fd19-acb1-4bb8-a60d-3471573759a3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d36c8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c271403a7b2460dec447d4968b9c8d5782df1b69c15427e1a2ef1786fc14c15,PodSandboxId:084fd910ecd31fe7eeb7a5fa8db591589a1cd997177aae539946a1881b992fdf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687809124731661353,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e453c39-db4b-4ce0-bf5b-62570a4cdb5b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6bd6339c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a065ab05079e55edc48a2ebe123e1713f153225e0f933af4761d124e80dff5,PodSandboxId:2f3d93341e55aebf05043184eecb8f4302003beaf1204ea011ff1cdfcfa08f61,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1687809106626981846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t5jmn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 346376a5-3a80-45ae-b431-cf22260afe5a,},Annotations:map[string]string{io.kubernetes.container.hash: 942337c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:243d79e9cd140b545d0878635cb3fb661da73f5873e8b41a7d0350afca2f8d34,PodSandboxId:b89b03717e14ce3e07c43df1622bfb6a95a3f495d33cc92852e12e11abd9bab9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809096181968297,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zcbpv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b43743ca-a616-4bb0-8af2-88365f57d7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 2aeb7ef9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0722220a1ae2ea44a1fbbf338158c8ecb878dca97b390747c3cd89817006f309,PodSandboxId:1c6589a02d03329ec585cd24951556b37b298dab11821726e655c753b0a1f436,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809095021108102,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sj6dj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 25dc6626-e3b0-4136-9d73-def95a263cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 48ab75d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6f3f36bbca9ec5b1b0819ea005164c9018b00e7c134aeb1d70e888558815c0,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809083339015294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abeb58f4336350b7f69886414cc319137c537240747663faa3e598319a85326e,PodSandboxId:2cb34a7fd5244a71cc46afa9b067e740f020187714d93aca4775b7ba1e0b838c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1687809052813969344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47w6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706df70a-65e2-47ed-9d36-6b8c6eb0d94e,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba8fc76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02ccfac365933f9d7d7dfedb663857aed3cc5c9ad15a5d7af38c9056cabf704,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687809052520745269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daadbe498f83e2f98eeee849e7096492687b21c860a20e2e82da9e1fe7b4e15,PodSandboxId:2ea22c8665e25bd49758b38e0ea5504ea84b746c832f029608e43c7c1e89144e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a7
54c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1687809051465356518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-c5wmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ea4c36-dc23-4576-a16c-14053c5e8040,},Annotations:map[string]string{io.kubernetes.container.hash: bf8a01f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe02c5efc2944ec52df33d352d7469a8864125db46e01ee50ade0590f64d1ceb,PodSand
boxId:57a5c26eddc315735e6ad17ebf64e510cb9c78875316093dcecc45fa60e1f257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1687809027520584939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d920ce23971dfe05b745be63e4819aec,},Annotations:map[string]string{io.kubernetes.container.hash: 4120caf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90127ff30895c83ec7c63f6b60cf26664392d065b2b271c0b973be9299f104cb,PodSandboxId:8120ce8ec6016c30ca2916b96138c1f3b9242a16
1be079da1fba288100758b1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1687809026536031408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6366e8ced31a9bb0c0e73aed13c64d70db908fafa78ae922c43b34b4e386f10c,PodSandboxId:468299f1ff8eabec0ff7f4646d68e2317611ab025f9af8
e3052db8235890fdd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1687809026208725588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc7751c76f98f0471dff26f691aafcb,},Annotations:map[string]string{io.kubernetes.container.hash: 67aabc89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edef46c7fcd6dd0245e8796eb1be15d9e826aba1045181ea1b23c2ecaa058836,PodSandboxId:b72e28f28fd9205147e9d8cd2705b0824ec51599d8d7d3662ec8
92a2cdd4cdc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1687809026141636544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=208cf195-c586-42f5-be6c-5c5510eb66e3 name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
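The surrounding entries are cri-o's debug traces of the CRI list-containers RPC (/runtime.v1alpha2.RuntimeService/ListContainers) being polled with an empty filter roughly every 30ms, each poll returning the same container list; this is most likely the kubelet's periodic container sync. A minimal sketch of fetching the same listing by hand, assuming crictl is available inside the minikube guest and cri-o is listening on its default socket:

  $ minikube ssh -p ingress-addon-legacy-759751
  # inside the guest: list all containers (running and exited), matching the responses above
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a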
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.488391380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9fe3f0f1-1614-46e9-8a78-80107249c57b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.488477993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9fe3f0f1-1614-46e9-8a78-80107249c57b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.488845827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79d13d53bb2eff77c56c61570c3febe0ee8d388faec798aa3fade14741a6d952,PodSandboxId:f40cc42617aa6636f34c371ab1516a958c99a43932b113bf0155e89a9dbebe0b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687809264313911710,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-smn5g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08b5fd19-acb1-4bb8-a60d-3471573759a3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d36c8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c271403a7b2460dec447d4968b9c8d5782df1b69c15427e1a2ef1786fc14c15,PodSandboxId:084fd910ecd31fe7eeb7a5fa8db591589a1cd997177aae539946a1881b992fdf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687809124731661353,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e453c39-db4b-4ce0-bf5b-62570a4cdb5b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6bd6339c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a065ab05079e55edc48a2ebe123e1713f153225e0f933af4761d124e80dff5,PodSandboxId:2f3d93341e55aebf05043184eecb8f4302003beaf1204ea011ff1cdfcfa08f61,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1687809106626981846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t5jmn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 346376a5-3a80-45ae-b431-cf22260afe5a,},Annotations:map[string]string{io.kubernetes.container.hash: 942337c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:243d79e9cd140b545d0878635cb3fb661da73f5873e8b41a7d0350afca2f8d34,PodSandboxId:b89b03717e14ce3e07c43df1622bfb6a95a3f495d33cc92852e12e11abd9bab9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809096181968297,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zcbpv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b43743ca-a616-4bb0-8af2-88365f57d7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 2aeb7ef9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0722220a1ae2ea44a1fbbf338158c8ecb878dca97b390747c3cd89817006f309,PodSandboxId:1c6589a02d03329ec585cd24951556b37b298dab11821726e655c753b0a1f436,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809095021108102,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sj6dj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 25dc6626-e3b0-4136-9d73-def95a263cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 48ab75d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6f3f36bbca9ec5b1b0819ea005164c9018b00e7c134aeb1d70e888558815c0,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809083339015294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abeb58f4336350b7f69886414cc319137c537240747663faa3e598319a85326e,PodSandboxId:2cb34a7fd5244a71cc46afa9b067e740f020187714d93aca4775b7ba1e0b838c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1687809052813969344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47w6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706df70a-65e2-47ed-9d36-6b8c6eb0d94e,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba8fc76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02ccfac365933f9d7d7dfedb663857aed3cc5c9ad15a5d7af38c9056cabf704,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687809052520745269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daadbe498f83e2f98eeee849e7096492687b21c860a20e2e82da9e1fe7b4e15,PodSandboxId:2ea22c8665e25bd49758b38e0ea5504ea84b746c832f029608e43c7c1e89144e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a7
54c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1687809051465356518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-c5wmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ea4c36-dc23-4576-a16c-14053c5e8040,},Annotations:map[string]string{io.kubernetes.container.hash: bf8a01f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe02c5efc2944ec52df33d352d7469a8864125db46e01ee50ade0590f64d1ceb,PodSand
boxId:57a5c26eddc315735e6ad17ebf64e510cb9c78875316093dcecc45fa60e1f257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1687809027520584939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d920ce23971dfe05b745be63e4819aec,},Annotations:map[string]string{io.kubernetes.container.hash: 4120caf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90127ff30895c83ec7c63f6b60cf26664392d065b2b271c0b973be9299f104cb,PodSandboxId:8120ce8ec6016c30ca2916b96138c1f3b9242a16
1be079da1fba288100758b1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1687809026536031408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6366e8ced31a9bb0c0e73aed13c64d70db908fafa78ae922c43b34b4e386f10c,PodSandboxId:468299f1ff8eabec0ff7f4646d68e2317611ab025f9af8
e3052db8235890fdd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1687809026208725588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc7751c76f98f0471dff26f691aafcb,},Annotations:map[string]string{io.kubernetes.container.hash: 67aabc89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edef46c7fcd6dd0245e8796eb1be15d9e826aba1045181ea1b23c2ecaa058836,PodSandboxId:b72e28f28fd9205147e9d8cd2705b0824ec51599d8d7d3662ec8
92a2cdd4cdc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1687809026141636544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9fe3f0f1-1614-46e9-8a78-80107249c57b name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.522297880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ab25ce89-8ed1-46d2-ba73-cd48c0a10535 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.522362302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ab25ce89-8ed1-46d2-ba73-cd48c0a10535 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.522612533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79d13d53bb2eff77c56c61570c3febe0ee8d388faec798aa3fade14741a6d952,PodSandboxId:f40cc42617aa6636f34c371ab1516a958c99a43932b113bf0155e89a9dbebe0b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687809264313911710,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-smn5g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08b5fd19-acb1-4bb8-a60d-3471573759a3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d36c8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c271403a7b2460dec447d4968b9c8d5782df1b69c15427e1a2ef1786fc14c15,PodSandboxId:084fd910ecd31fe7eeb7a5fa8db591589a1cd997177aae539946a1881b992fdf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687809124731661353,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e453c39-db4b-4ce0-bf5b-62570a4cdb5b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6bd6339c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a065ab05079e55edc48a2ebe123e1713f153225e0f933af4761d124e80dff5,PodSandboxId:2f3d93341e55aebf05043184eecb8f4302003beaf1204ea011ff1cdfcfa08f61,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1687809106626981846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t5jmn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 346376a5-3a80-45ae-b431-cf22260afe5a,},Annotations:map[string]string{io.kubernetes.container.hash: 942337c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:243d79e9cd140b545d0878635cb3fb661da73f5873e8b41a7d0350afca2f8d34,PodSandboxId:b89b03717e14ce3e07c43df1622bfb6a95a3f495d33cc92852e12e11abd9bab9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809096181968297,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zcbpv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b43743ca-a616-4bb0-8af2-88365f57d7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 2aeb7ef9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0722220a1ae2ea44a1fbbf338158c8ecb878dca97b390747c3cd89817006f309,PodSandboxId:1c6589a02d03329ec585cd24951556b37b298dab11821726e655c753b0a1f436,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809095021108102,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sj6dj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 25dc6626-e3b0-4136-9d73-def95a263cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 48ab75d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6f3f36bbca9ec5b1b0819ea005164c9018b00e7c134aeb1d70e888558815c0,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809083339015294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abeb58f4336350b7f69886414cc319137c537240747663faa3e598319a85326e,PodSandboxId:2cb34a7fd5244a71cc46afa9b067e740f020187714d93aca4775b7ba1e0b838c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1687809052813969344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47w6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706df70a-65e2-47ed-9d36-6b8c6eb0d94e,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba8fc76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02ccfac365933f9d7d7dfedb663857aed3cc5c9ad15a5d7af38c9056cabf704,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687809052520745269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daadbe498f83e2f98eeee849e7096492687b21c860a20e2e82da9e1fe7b4e15,PodSandboxId:2ea22c8665e25bd49758b38e0ea5504ea84b746c832f029608e43c7c1e89144e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a7
54c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1687809051465356518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-c5wmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ea4c36-dc23-4576-a16c-14053c5e8040,},Annotations:map[string]string{io.kubernetes.container.hash: bf8a01f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe02c5efc2944ec52df33d352d7469a8864125db46e01ee50ade0590f64d1ceb,PodSand
boxId:57a5c26eddc315735e6ad17ebf64e510cb9c78875316093dcecc45fa60e1f257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1687809027520584939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d920ce23971dfe05b745be63e4819aec,},Annotations:map[string]string{io.kubernetes.container.hash: 4120caf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90127ff30895c83ec7c63f6b60cf26664392d065b2b271c0b973be9299f104cb,PodSandboxId:8120ce8ec6016c30ca2916b96138c1f3b9242a16
1be079da1fba288100758b1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1687809026536031408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6366e8ced31a9bb0c0e73aed13c64d70db908fafa78ae922c43b34b4e386f10c,PodSandboxId:468299f1ff8eabec0ff7f4646d68e2317611ab025f9af8
e3052db8235890fdd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1687809026208725588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc7751c76f98f0471dff26f691aafcb,},Annotations:map[string]string{io.kubernetes.container.hash: 67aabc89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edef46c7fcd6dd0245e8796eb1be15d9e826aba1045181ea1b23c2ecaa058836,PodSandboxId:b72e28f28fd9205147e9d8cd2705b0824ec51599d8d7d3662ec8
92a2cdd4cdc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1687809026141636544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ab25ce89-8ed1-46d2-ba73-cd48c0a10535 name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.556163189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0c67ea22-a350-47d8-96b4-66b912fe644c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.556232912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0c67ea22-a350-47d8-96b4-66b912fe644c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.556489514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79d13d53bb2eff77c56c61570c3febe0ee8d388faec798aa3fade14741a6d952,PodSandboxId:f40cc42617aa6636f34c371ab1516a958c99a43932b113bf0155e89a9dbebe0b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687809264313911710,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-smn5g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08b5fd19-acb1-4bb8-a60d-3471573759a3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d36c8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c271403a7b2460dec447d4968b9c8d5782df1b69c15427e1a2ef1786fc14c15,PodSandboxId:084fd910ecd31fe7eeb7a5fa8db591589a1cd997177aae539946a1881b992fdf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687809124731661353,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e453c39-db4b-4ce0-bf5b-62570a4cdb5b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6bd6339c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a065ab05079e55edc48a2ebe123e1713f153225e0f933af4761d124e80dff5,PodSandboxId:2f3d93341e55aebf05043184eecb8f4302003beaf1204ea011ff1cdfcfa08f61,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1687809106626981846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t5jmn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 346376a5-3a80-45ae-b431-cf22260afe5a,},Annotations:map[string]string{io.kubernetes.container.hash: 942337c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:243d79e9cd140b545d0878635cb3fb661da73f5873e8b41a7d0350afca2f8d34,PodSandboxId:b89b03717e14ce3e07c43df1622bfb6a95a3f495d33cc92852e12e11abd9bab9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809096181968297,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zcbpv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b43743ca-a616-4bb0-8af2-88365f57d7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 2aeb7ef9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0722220a1ae2ea44a1fbbf338158c8ecb878dca97b390747c3cd89817006f309,PodSandboxId:1c6589a02d03329ec585cd24951556b37b298dab11821726e655c753b0a1f436,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809095021108102,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sj6dj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 25dc6626-e3b0-4136-9d73-def95a263cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 48ab75d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6f3f36bbca9ec5b1b0819ea005164c9018b00e7c134aeb1d70e888558815c0,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809083339015294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abeb58f4336350b7f69886414cc319137c537240747663faa3e598319a85326e,PodSandboxId:2cb34a7fd5244a71cc46afa9b067e740f020187714d93aca4775b7ba1e0b838c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1687809052813969344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47w6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706df70a-65e2-47ed-9d36-6b8c6eb0d94e,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba8fc76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02ccfac365933f9d7d7dfedb663857aed3cc5c9ad15a5d7af38c9056cabf704,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687809052520745269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daadbe498f83e2f98eeee849e7096492687b21c860a20e2e82da9e1fe7b4e15,PodSandboxId:2ea22c8665e25bd49758b38e0ea5504ea84b746c832f029608e43c7c1e89144e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a7
54c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1687809051465356518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-c5wmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ea4c36-dc23-4576-a16c-14053c5e8040,},Annotations:map[string]string{io.kubernetes.container.hash: bf8a01f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe02c5efc2944ec52df33d352d7469a8864125db46e01ee50ade0590f64d1ceb,PodSand
boxId:57a5c26eddc315735e6ad17ebf64e510cb9c78875316093dcecc45fa60e1f257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1687809027520584939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d920ce23971dfe05b745be63e4819aec,},Annotations:map[string]string{io.kubernetes.container.hash: 4120caf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90127ff30895c83ec7c63f6b60cf26664392d065b2b271c0b973be9299f104cb,PodSandboxId:8120ce8ec6016c30ca2916b96138c1f3b9242a16
1be079da1fba288100758b1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1687809026536031408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6366e8ced31a9bb0c0e73aed13c64d70db908fafa78ae922c43b34b4e386f10c,PodSandboxId:468299f1ff8eabec0ff7f4646d68e2317611ab025f9af8
e3052db8235890fdd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1687809026208725588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc7751c76f98f0471dff26f691aafcb,},Annotations:map[string]string{io.kubernetes.container.hash: 67aabc89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edef46c7fcd6dd0245e8796eb1be15d9e826aba1045181ea1b23c2ecaa058836,PodSandboxId:b72e28f28fd9205147e9d8cd2705b0824ec51599d8d7d3662ec8
92a2cdd4cdc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1687809026141636544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0c67ea22-a350-47d8-96b4-66b912fe644c name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.589879878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=90b26df4-f338-4cb4-9ec9-1f71a6a81478 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.589991308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=90b26df4-f338-4cb4-9ec9-1f71a6a81478 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.590250035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79d13d53bb2eff77c56c61570c3febe0ee8d388faec798aa3fade14741a6d952,PodSandboxId:f40cc42617aa6636f34c371ab1516a958c99a43932b113bf0155e89a9dbebe0b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687809264313911710,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-smn5g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08b5fd19-acb1-4bb8-a60d-3471573759a3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d36c8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c271403a7b2460dec447d4968b9c8d5782df1b69c15427e1a2ef1786fc14c15,PodSandboxId:084fd910ecd31fe7eeb7a5fa8db591589a1cd997177aae539946a1881b992fdf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687809124731661353,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e453c39-db4b-4ce0-bf5b-62570a4cdb5b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6bd6339c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a065ab05079e55edc48a2ebe123e1713f153225e0f933af4761d124e80dff5,PodSandboxId:2f3d93341e55aebf05043184eecb8f4302003beaf1204ea011ff1cdfcfa08f61,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1687809106626981846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t5jmn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 346376a5-3a80-45ae-b431-cf22260afe5a,},Annotations:map[string]string{io.kubernetes.container.hash: 942337c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:243d79e9cd140b545d0878635cb3fb661da73f5873e8b41a7d0350afca2f8d34,PodSandboxId:b89b03717e14ce3e07c43df1622bfb6a95a3f495d33cc92852e12e11abd9bab9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809096181968297,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zcbpv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b43743ca-a616-4bb0-8af2-88365f57d7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 2aeb7ef9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0722220a1ae2ea44a1fbbf338158c8ecb878dca97b390747c3cd89817006f309,PodSandboxId:1c6589a02d03329ec585cd24951556b37b298dab11821726e655c753b0a1f436,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809095021108102,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sj6dj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 25dc6626-e3b0-4136-9d73-def95a263cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 48ab75d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6f3f36bbca9ec5b1b0819ea005164c9018b00e7c134aeb1d70e888558815c0,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809083339015294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abeb58f4336350b7f69886414cc319137c537240747663faa3e598319a85326e,PodSandboxId:2cb34a7fd5244a71cc46afa9b067e740f020187714d93aca4775b7ba1e0b838c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1687809052813969344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47w6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706df70a-65e2-47ed-9d36-6b8c6eb0d94e,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba8fc76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02ccfac365933f9d7d7dfedb663857aed3cc5c9ad15a5d7af38c9056cabf704,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687809052520745269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daadbe498f83e2f98eeee849e7096492687b21c860a20e2e82da9e1fe7b4e15,PodSandboxId:2ea22c8665e25bd49758b38e0ea5504ea84b746c832f029608e43c7c1e89144e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a7
54c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1687809051465356518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-c5wmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ea4c36-dc23-4576-a16c-14053c5e8040,},Annotations:map[string]string{io.kubernetes.container.hash: bf8a01f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe02c5efc2944ec52df33d352d7469a8864125db46e01ee50ade0590f64d1ceb,PodSand
boxId:57a5c26eddc315735e6ad17ebf64e510cb9c78875316093dcecc45fa60e1f257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1687809027520584939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d920ce23971dfe05b745be63e4819aec,},Annotations:map[string]string{io.kubernetes.container.hash: 4120caf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90127ff30895c83ec7c63f6b60cf26664392d065b2b271c0b973be9299f104cb,PodSandboxId:8120ce8ec6016c30ca2916b96138c1f3b9242a16
1be079da1fba288100758b1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1687809026536031408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6366e8ced31a9bb0c0e73aed13c64d70db908fafa78ae922c43b34b4e386f10c,PodSandboxId:468299f1ff8eabec0ff7f4646d68e2317611ab025f9af8
e3052db8235890fdd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1687809026208725588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc7751c76f98f0471dff26f691aafcb,},Annotations:map[string]string{io.kubernetes.container.hash: 67aabc89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edef46c7fcd6dd0245e8796eb1be15d9e826aba1045181ea1b23c2ecaa058836,PodSandboxId:b72e28f28fd9205147e9d8cd2705b0824ec51599d8d7d3662ec8
92a2cdd4cdc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1687809026141636544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=90b26df4-f338-4cb4-9ec9-1f71a6a81478 name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.619286740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ccd921f9-f372-4a8d-9e90-1784d9509ecf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.619386828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ccd921f9-f372-4a8d-9e90-1784d9509ecf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 19:54:31 ingress-addon-legacy-759751 crio[723]: time="2023-06-26 19:54:31.619672804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79d13d53bb2eff77c56c61570c3febe0ee8d388faec798aa3fade14741a6d952,PodSandboxId:f40cc42617aa6636f34c371ab1516a958c99a43932b113bf0155e89a9dbebe0b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1687809264313911710,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-smn5g,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 08b5fd19-acb1-4bb8-a60d-3471573759a3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f7d36c8,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c271403a7b2460dec447d4968b9c8d5782df1b69c15427e1a2ef1786fc14c15,PodSandboxId:084fd910ecd31fe7eeb7a5fa8db591589a1cd997177aae539946a1881b992fdf,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1687809124731661353,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e453c39-db4b-4ce0-bf5b-62570a4cdb5b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6bd6339c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a065ab05079e55edc48a2ebe123e1713f153225e0f933af4761d124e80dff5,PodSandboxId:2f3d93341e55aebf05043184eecb8f4302003beaf1204ea011ff1cdfcfa08f61,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1687809106626981846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-t5jmn,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 346376a5-3a80-45ae-b431-cf22260afe5a,},Annotations:map[string]string{io.kubernetes.container.hash: 942337c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:243d79e9cd140b545d0878635cb3fb661da73f5873e8b41a7d0350afca2f8d34,PodSandboxId:b89b03717e14ce3e07c43df1622bfb6a95a3f495d33cc92852e12e11abd9bab9,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809096181968297,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zcbpv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b43743ca-a616-4bb0-8af2-88365f57d7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 2aeb7ef9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0722220a1ae2ea44a1fbbf338158c8ecb878dca97b390747c3cd89817006f309,PodSandboxId:1c6589a02d03329ec585cd24951556b37b298dab11821726e655c753b0a1f436,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1687809095021108102,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sj6dj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 25dc6626-e3b0-4136-9d73-def95a263cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 48ab75d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6f3f36bbca9ec5b1b0819ea005164c9018b00e7c134aeb1d70e888558815c0,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809083339015294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abeb58f4336350b7f69886414cc319137c537240747663faa3e598319a85326e,PodSandboxId:2cb34a7fd5244a71cc46afa9b067e740f020187714d93aca4775b7ba1e0b838c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1687809052813969344,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47w6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 706df70a-65e2-47ed-9d36-6b8c6eb0d94e,},Annotations:map[string]string{io.kubernetes.container.hash: 7ba8fc76,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02ccfac365933f9d7d7dfedb663857aed3cc5c9ad15a5d7af38c9056cabf704,PodSandboxId:e25f4e969d0a1509ff974b62c32f8c7866eefbe9417ce480d5063f9934f1504a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687809052520745269,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e3384e0-e295-4914-90fe-9b6defe12171,},Annotations:map[string]string{io.kubernetes.container.hash: c32be236,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9daadbe498f83e2f98eeee849e7096492687b21c860a20e2e82da9e1fe7b4e15,PodSandboxId:2ea22c8665e25bd49758b38e0ea5504ea84b746c832f029608e43c7c1e89144e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a7
54c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1687809051465356518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-c5wmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92ea4c36-dc23-4576-a16c-14053c5e8040,},Annotations:map[string]string{io.kubernetes.container.hash: bf8a01f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe02c5efc2944ec52df33d352d7469a8864125db46e01ee50ade0590f64d1ceb,PodSand
boxId:57a5c26eddc315735e6ad17ebf64e510cb9c78875316093dcecc45fa60e1f257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1687809027520584939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d920ce23971dfe05b745be63e4819aec,},Annotations:map[string]string{io.kubernetes.container.hash: 4120caf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90127ff30895c83ec7c63f6b60cf26664392d065b2b271c0b973be9299f104cb,PodSandboxId:8120ce8ec6016c30ca2916b96138c1f3b9242a16
1be079da1fba288100758b1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1687809026536031408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6366e8ced31a9bb0c0e73aed13c64d70db908fafa78ae922c43b34b4e386f10c,PodSandboxId:468299f1ff8eabec0ff7f4646d68e2317611ab025f9af8
e3052db8235890fdd3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1687809026208725588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc7751c76f98f0471dff26f691aafcb,},Annotations:map[string]string{io.kubernetes.container.hash: 67aabc89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edef46c7fcd6dd0245e8796eb1be15d9e826aba1045181ea1b23c2ecaa058836,PodSandboxId:b72e28f28fd9205147e9d8cd2705b0824ec51599d8d7d3662ec8
92a2cdd4cdc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1687809026141636544,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-759751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ccd921f9-f372-4a8d-9e90-1784d9509ecf name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
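
Note: the ListContainers entries above are CRI-O's gRPC trace of the CRI runtime.v1alpha2.RuntimeService/ListContainers call that the kubelet (and tools such as crictl) issue against the CRI-O unix socket; an empty ContainerFilter triggers the "No filters were applied" path and returns the full container list. A minimal Go sketch of the same call, assuming the k8s.io/cri-api (v1alpha2) and google.golang.org/grpc modules are available and that the socket lives at /var/run/crio/crio.sock as on this node:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"

    	"google.golang.org/grpc"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
    )

    func main() {
    	// Dial the CRI-O unix socket directly; with no resolver scheme in the
    	// target, grpc hands the raw path string to this dialer.
    	dial := func(ctx context.Context, addr string) (net.Conn, error) {
    		return (&net.Dialer{}).DialContext(ctx, "unix", addr)
    	}
    	conn, err := grpc.Dial("/var/run/crio/crio.sock",
    		grpc.WithInsecure(), grpc.WithContextDialer(dial))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty filter reproduces the "No filters were applied" debug line
    	// above, so the daemon returns every container it knows about.
    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := client.ListContainers(ctx,
    		&runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		// Truncated ID, name, and state, roughly matching crictl's layout.
    		fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
    	}
    }

Run on the node, this prints roughly the same columns as the container-status table that follows.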
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	79d13d53bb2ef       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            7 seconds ago       Running             hello-world-app           0                   f40cc42617aa6
	0c271403a7b24       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                    2 minutes ago       Running             nginx                     0                   084fd910ecd31
	07a065ab05079       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   2f3d93341e55a
	243d79e9cd140       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   b89b03717e14c
	0722220a1ae2e       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   1c6589a02d033
	4f6f3f36bbca9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       1                   e25f4e969d0a1
	abeb58f433635       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   2cb34a7fd5244
	d02ccfac36593       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Exited              storage-provisioner       0                   e25f4e969d0a1
	9daadbe498f83       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   2ea22c8665e25
	fe02c5efc2944       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   57a5c26eddc31
	90127ff30895c       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   8120ce8ec6016
	6366e8ced31a9       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   468299f1ff8ea
	edef46c7fcd6d       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   b72e28f28fd92
	
	* 
	* ==> coredns [9daadbe498f83e2f98eeee849e7096492687b21c860a20e2e82da9e1fe7b4e15] <==
	* [INFO] 10.244.0.6:32954 - 62413 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077106s
	[INFO] 10.244.0.6:32954 - 26733 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000217021s
	[INFO] 10.244.0.6:32954 - 17226 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083301s
	[INFO] 10.244.0.6:32954 - 43941 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104982s
	[INFO] 10.244.0.6:47953 - 2241 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000129421s
	[INFO] 10.244.0.6:47953 - 63898 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092801s
	[INFO] 10.244.0.6:47953 - 18437 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000104462s
	[INFO] 10.244.0.6:47953 - 29378 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070942s
	[INFO] 10.244.0.6:47953 - 52314 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045418s
	[INFO] 10.244.0.6:47953 - 50243 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039791s
	[INFO] 10.244.0.6:47953 - 19088 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077537s
	[INFO] 10.244.0.6:48000 - 44782 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000118934s
	[INFO] 10.244.0.6:38993 - 21968 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078181s
	[INFO] 10.244.0.6:38993 - 45116 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000071868s
	[INFO] 10.244.0.6:48000 - 18367 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00005486s
	[INFO] 10.244.0.6:38993 - 29674 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062007s
	[INFO] 10.244.0.6:48000 - 2993 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035425s
	[INFO] 10.244.0.6:38993 - 51168 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054072s
	[INFO] 10.244.0.6:48000 - 63883 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034795s
	[INFO] 10.244.0.6:38993 - 21048 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070103s
	[INFO] 10.244.0.6:48000 - 7892 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046058s
	[INFO] 10.244.0.6:38993 - 57145 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026627s
	[INFO] 10.244.0.6:38993 - 47398 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060507s
	[INFO] 10.244.0.6:48000 - 5750 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036556s
	[INFO] 10.244.0.6:48000 - 34407 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047592s
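
Note: the NXDOMAIN bursts above are not resolution failures but ordinary Kubernetes search-path expansion. The client at 10.244.0.6 (the ingress-nginx controller pod) looks up hello-world-app.default.svc.cluster.local, and because that name has fewer dots than the pod's ndots threshold, the resolver first tries it with each suffix from the pod's search list (each answered NXDOMAIN) before querying the name as written, which is the one query that returns NOERROR. A resolv.conf like the following, as kubelet conventionally writes for a pod in the ingress-nginx namespace, would produce exactly this query sequence (the nameserver address is the usual kube-dns ClusterIP and is an assumption here, not taken from the log):

    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5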
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-759751
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-759751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=ingress-addon-legacy-759751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T19_50_34_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 19:50:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-759751
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 19:54:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 19:52:34 +0000   Mon, 26 Jun 2023 19:50:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 19:52:34 +0000   Mon, 26 Jun 2023 19:50:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 19:52:34 +0000   Mon, 26 Jun 2023 19:50:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 19:52:34 +0000   Mon, 26 Jun 2023 19:50:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    ingress-addon-legacy-759751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 03863195d0654c16b437110178b4ff81
	  System UUID:                03863195-d065-4c16-b437-110178b4ff81
	  Boot ID:                    e5c0c366-3185-4401-9af6-dc474c263c56
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-smn5g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-66bff467f8-c5wmz                                100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m41s
	  kube-system                 etcd-ingress-addon-legacy-759751                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-apiserver-ingress-addon-legacy-759751              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-759751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-proxy-47w6x                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-scheduler-ingress-addon-legacy-759751              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 4m7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x5 over 4m7s)  kubelet     Node ingress-addon-legacy-759751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x4 over 4m7s)  kubelet     Node ingress-addon-legacy-759751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x4 over 4m7s)  kubelet     Node ingress-addon-legacy-759751 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m57s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m57s                kubelet     Node ingress-addon-legacy-759751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s                kubelet     Node ingress-addon-legacy-759751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s                kubelet     Node ingress-addon-legacy-759751 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m47s                kubelet     Node ingress-addon-legacy-759751 status is now: NodeReady
	  Normal  Starting                 3m38s                kube-proxy  Starting kube-proxy.
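
Note: as a quick consistency check on the resource table above, the 650m (32%) of requested CPU is the sum of the four components that set requests, 100m (coredns) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 650m, against the node's 2-CPU (2000m) capacity, and the 70Mi/170Mi memory figures come entirely from coredns.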
	
	* 
	* ==> dmesg <==
	* [Jun26 19:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.097076] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.154144] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.340149] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151964] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jun26 19:50] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.054721] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.103730] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.148030] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.107638] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.213167] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[  +8.060099] systemd-fstab-generator[1040]: Ignoring "noauto" for root device
	[  +3.030933] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.210496] systemd-fstab-generator[1442]: Ignoring "noauto" for root device
	[ +16.901287] kauditd_printk_skb: 6 callbacks suppressed
	[Jun26 19:51] kauditd_printk_skb: 16 callbacks suppressed
	[ +35.149326] kauditd_printk_skb: 6 callbacks suppressed
	[ +23.715877] kauditd_printk_skb: 7 callbacks suppressed
	[Jun26 19:52] kauditd_printk_skb: 3 callbacks suppressed
	[Jun26 19:54] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [fe02c5efc2944ec52df33d352d7469a8864125db46e01ee50ade0590f64d1ceb] <==
	* 2023-06-26 19:50:27.743849 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-06-26 19:50:27.743920 I | embed: listening for peers on 192.168.39.7:2380
	raft2023/06/26 19:50:27 INFO: bb39151d8411994b is starting a new election at term 1
	raft2023/06/26 19:50:27 INFO: bb39151d8411994b became candidate at term 2
	raft2023/06/26 19:50:27 INFO: bb39151d8411994b received MsgVoteResp from bb39151d8411994b at term 2
	raft2023/06/26 19:50:27 INFO: bb39151d8411994b became leader at term 2
	raft2023/06/26 19:50:27 INFO: raft.node: bb39151d8411994b elected leader bb39151d8411994b at term 2
	2023-06-26 19:50:27.928163 I | etcdserver: setting up the initial cluster version to 3.4
	2023-06-26 19:50:27.929746 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-06-26 19:50:27.929859 I | etcdserver: published {Name:ingress-addon-legacy-759751 ClientURLs:[https://192.168.39.7:2379]} to cluster 3202df3d6e5aadcb
	2023-06-26 19:50:27.929956 I | etcdserver/api: enabled capabilities for version 3.4
	2023-06-26 19:50:27.929994 I | embed: ready to serve client requests
	2023-06-26 19:50:27.930337 I | embed: ready to serve client requests
	2023-06-26 19:50:27.931266 I | embed: serving client requests on 192.168.39.7:2379
	2023-06-26 19:50:27.931554 I | embed: serving client requests on 127.0.0.1:2379
	2023-06-26 19:50:50.280625 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:173" took too long (430.509071ms) to execute
	2023-06-26 19:50:50.704090 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-47w6x\" " with result "range_response_count:1 size:3588" took too long (108.717903ms) to execute
	2023-06-26 19:50:50.709121 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-public/default\" " with result "range_response_count:1 size:181" took too long (112.762379ms) to execute
	2023-06-26 19:50:50.709621 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/default\" " with result "range_response_count:1 size:181" took too long (112.958769ms) to execute
	2023-06-26 19:50:51.051119 W | etcdserver: read-only range request "key:\"/registry/daemonsets/kube-system/kube-proxy\" " with result "range_response_count:1 size:2927" took too long (360.904123ms) to execute
	2023-06-26 19:50:51.052474 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-node-lease/default\" " with result "range_response_count:1 size:189" took too long (362.170023ms) to execute
	2023-06-26 19:50:51.294683 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-4rf97\" " with result "range_response_count:1 size:3656" took too long (153.375276ms) to execute
	2023-06-26 19:50:51.392096 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-4rf97\" " with result "range_response_count:1 size:3656" took too long (250.606695ms) to execute
	2023-06-26 19:50:51.431658 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (100.393181ms) to execute
	2023-06-26 19:51:53.278120 W | etcdserver: request "header:<ID:11046073115190662375 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.7\" mod_revision:527 > success:<request_put:<key:\"/registry/masterleases/192.168.39.7\" value_size:67 lease:1822701078335886565 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.7\" > >>" with result "size:16" took too long (425.638309ms) to execute
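
Note: etcd emits these "took too long" warnings when a request exceeds its slow-request threshold (100 ms by default in this 3.4-era build). The 425 ms masterleases transaction logged at 19:51:53.278 appears to be the same operation the apiserver records below as Trace[910876848], a 570 ms GuaranteedUpdate on /registry/masterleases; on a small 2-vCPU VM such latencies usually reflect disk fsync speed rather than a cluster fault.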
	
	* 
	* ==> kernel <==
	*  19:54:31 up 4 min,  0 users,  load average: 1.23, 0.47, 0.18
	Linux ingress-addon-legacy-759751 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6366e8ced31a9bb0c0e73aed13c64d70db908fafa78ae922c43b34b4e386f10c] <==
	* I0626 19:50:31.193219       1 cache.go:39] Caches are synced for autoregister controller
	I0626 19:50:32.089927       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0626 19:50:32.090021       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0626 19:50:32.098129       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0626 19:50:32.107586       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0626 19:50:32.107623       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0626 19:50:32.558533       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0626 19:50:32.602448       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0626 19:50:32.723926       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.7]
	I0626 19:50:32.724863       1 controller.go:609] quota admission added evaluator for: endpoints
	I0626 19:50:32.729309       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0626 19:50:33.445177       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0626 19:50:34.191973       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0626 19:50:34.300516       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0626 19:50:34.576305       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0626 19:50:49.855236       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0626 19:50:50.095451       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0626 19:50:50.291478       1 trace.go:116] Trace[1225571759]: "GuaranteedUpdate etcd3" type:*certificates.CertificateSigningRequest (started: 2023-06-26 19:50:49.790446543 +0000 UTC m=+23.459711675) (total time: 500.889118ms):
	Trace[1225571759]: [500.874675ms] [495.979092ms] Transaction committed
	I0626 19:50:50.291807       1 trace.go:116] Trace[998804105]: "Update" url:/apis/certificates.k8s.io/v1beta1/certificatesigningrequests/csr-jxbv8/approval,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:certificate-controller,client:192.168.39.7 (started: 2023-06-26 19:50:49.790293197 +0000 UTC m=+23.459558314) (total time: 501.439507ms):
	Trace[998804105]: [501.40751ms] [501.28314ms] Object stored in database
	I0626 19:51:30.226637       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0626 19:51:53.278880       1 trace.go:116] Trace[910876848]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2023-06-26 19:51:52.707888834 +0000 UTC m=+86.377153964) (total time: 570.963177ms):
	Trace[910876848]: [570.936757ms] [566.369592ms] Transaction committed
	I0626 19:51:59.332663       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [edef46c7fcd6dd0245e8796eb1be15d9e826aba1045181ea1b23c2ecaa058836] <==
	* I0626 19:50:50.318930       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"38d2cf8d-96e4-4021-8467-317ec4d02370", APIVersion:"apps/v1", ResourceVersion:"204", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0626 19:50:50.374237       1 shared_informer.go:230] Caches are synced for resource quota 
	I0626 19:50:50.410936       1 range_allocator.go:373] Set node ingress-addon-legacy-759751 PodCIDR to [10.244.0.0/24]
	E0626 19:50:50.419710       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0626 19:50:50.459566       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"9ec0da98-7f44-4187-9b70-e2649251e5e4", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-47w6x
	I0626 19:50:50.459737       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"63a3272b-4410-427a-9be3-4e1b8cc1aa02", APIVersion:"apps/v1", ResourceVersion:"318", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-4rf97
	I0626 19:50:50.558478       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"63a3272b-4410-427a-9be3-4e1b8cc1aa02", APIVersion:"apps/v1", ResourceVersion:"318", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-c5wmz
	I0626 19:50:50.559359       1 shared_informer.go:230] Caches are synced for resource quota 
	I0626 19:50:50.559494       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0626 19:50:50.559502       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0626 19:50:50.559566       1 shared_informer.go:230] Caches are synced for garbage collector 
	E0626 19:50:50.649470       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0626 19:50:51.079041       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"9ec0da98-7f44-4187-9b70-e2649251e5e4", ResourceVersion:"212", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63823405834, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00168c580), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc00168c5a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00168c5c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0014fb340), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc00168c5e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00168c600), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00168c640)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00116cfa0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000b831a8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0009c68c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001094140)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000b831f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0626 19:50:51.097350       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"38d2cf8d-96e4-4021-8467-317ec4d02370", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0626 19:50:51.548909       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"63a3272b-4410-427a-9be3-4e1b8cc1aa02", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-4rf97
	I0626 19:51:30.205903       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a30f1f47-9b42-4542-a33f-4800d00744d7", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0626 19:51:30.220283       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"c84ea53f-0e76-44b1-a463-daa4b74fb059", APIVersion:"apps/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-t5jmn
	I0626 19:51:30.246393       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"596e722f-9858-4853-b331-12cc67027cd8", APIVersion:"batch/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-sj6dj
	I0626 19:51:30.368708       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"791a1ab0-7cb7-431a-b55f-419349c9db8e", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-zcbpv
	I0626 19:51:35.375308       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"596e722f-9858-4853-b331-12cc67027cd8", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0626 19:51:36.385683       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"791a1ab0-7cb7-431a-b55f-419349c9db8e", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0626 19:54:20.816530       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"9334373e-9d9b-4db1-acfe-6cf39febc3c2", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0626 19:54:20.830393       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"82927b0b-bc1f-4716-a41c-7baa4877c85f", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-smn5g
	E0626 19:54:28.835407       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-sq2rl" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
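
The `Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified` error above is Kubernetes' optimistic-concurrency check firing: the controller wrote DaemonSet status with a stale `resourceVersion`. Callers recover by re-reading the object and retrying the write. A minimal client-go sketch of that pattern; the clientset wiring and the annotation it writes are illustrative, not taken from this run:

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// touchKubeProxyDaemonSet retries the read-modify-write until it lands on
// the latest resourceVersion, which is how controllers absorb the
// "object has been modified" conflict seen in the log above.
func touchKubeProxyDaemonSet(client kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-fetch on every attempt so the update carries a current
		// resourceVersion.
		ds, err := client.AppsV1().DaemonSets("kube-system").
			Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Annotations == nil {
			ds.Annotations = map[string]string{}
		}
		ds.Annotations["example.com/touched"] = "true" // hypothetical change
		_, err = client.AppsV1().DaemonSets("kube-system").
			Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err // a Conflict here makes RetryOnConflict re-run the closure
	})
}
```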
	
	* 
	* ==> kube-proxy [abeb58f4336350b7f69886414cc319137c537240747663faa3e598319a85326e] <==
	* W0626 19:50:52.996734       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0626 19:50:53.005078       1 node.go:136] Successfully retrieved node IP: 192.168.39.7
	I0626 19:50:53.005146       1 server_others.go:186] Using iptables Proxier.
	I0626 19:50:53.005449       1 server.go:583] Version: v1.18.20
	I0626 19:50:53.006964       1 config.go:315] Starting service config controller
	I0626 19:50:53.007008       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0626 19:50:53.010404       1 config.go:133] Starting endpoints config controller
	I0626 19:50:53.010454       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0626 19:50:53.107697       1 shared_informer.go:230] Caches are synced for service config 
	I0626 19:50:53.110651       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [90127ff30895c83ec7c63f6b60cf26664392d065b2b271c0b973be9299f104cb] <==
	* I0626 19:50:31.201238       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0626 19:50:31.201489       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0626 19:50:31.203527       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0626 19:50:31.203620       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0626 19:50:31.203627       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0626 19:50:31.203641       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0626 19:50:31.210937       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 19:50:31.216540       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 19:50:31.214194       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 19:50:31.216507       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 19:50:31.217020       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 19:50:31.217164       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 19:50:31.217181       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 19:50:31.217481       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 19:50:31.217636       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 19:50:31.217844       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 19:50:31.217860       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 19:50:31.218838       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 19:50:32.054651       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 19:50:32.102819       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 19:50:32.302374       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 19:50:32.406099       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 19:50:32.479429       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0626 19:50:34.403885       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0626 19:50:50.433141       1 factory.go:503] pod: kube-system/coredns-66bff467f8-4rf97 is already present in the active queue
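
The burst of `Failed to list ... is forbidden` errors above is a startup race, not a persistent fault: the scheduler's reflectors begin listing resources before the apiserver has reconciled the `system:kube-scheduler` RBAC rules, and each reflector retries with backoff until its list succeeds, which is when `Caches are synced` appears. A small sketch of the same list-then-sync pattern, assuming a configured clientset (the wiring here is hypothetical):

```go
package example

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// waitForPodCache starts a pod informer and blocks until its first
// successful list; transient "forbidden" errors simply make the
// underlying reflector retry with backoff, as in the scheduler log above.
func waitForPodCache(client kubernetes.Interface, stop <-chan struct{}) bool {
	factory := informers.NewSharedInformerFactory(client, 0)
	podInformer := factory.Core().V1().Pods().Informer()
	factory.Start(stop)
	return cache.WaitForCacheSync(stop, podInformer.HasSynced)
}
```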
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 19:49:57 UTC, ends at Mon 2023-06-26 19:54:32 UTC. --
	Jun 26 19:51:37 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:51:37.473515    1449 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43743ca-a616-4bb0-8af2-88365f57d7bd-ingress-nginx-admission-token-mhrj8" (OuterVolumeSpecName: "ingress-nginx-admission-token-mhrj8") pod "b43743ca-a616-4bb0-8af2-88365f57d7bd" (UID: "b43743ca-a616-4bb0-8af2-88365f57d7bd"). InnerVolumeSpecName "ingress-nginx-admission-token-mhrj8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 26 19:51:37 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:51:37.569570    1449 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-mhrj8" (UniqueName: "kubernetes.io/secret/b43743ca-a616-4bb0-8af2-88365f57d7bd-ingress-nginx-admission-token-mhrj8") on node "ingress-addon-legacy-759751" DevicePath ""
	Jun 26 19:51:48 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:51:48.315001    1449 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jun 26 19:51:48 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:51:48.412210    1449 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-c6tcv" (UniqueName: "kubernetes.io/secret/801791f0-5fd1-4cd6-95a1-62a11e9afc6f-minikube-ingress-dns-token-c6tcv") pod "kube-ingress-dns-minikube" (UID: "801791f0-5fd1-4cd6-95a1-62a11e9afc6f")
	Jun 26 19:51:59 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:51:59.510366    1449 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jun 26 19:51:59 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:51:59.649945    1449 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dqmnm" (UniqueName: "kubernetes.io/secret/9e453c39-db4b-4ce0-bf5b-62570a4cdb5b-default-token-dqmnm") pod "nginx" (UID: "9e453c39-db4b-4ce0-bf5b-62570a4cdb5b")
	Jun 26 19:54:20 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:20.841206    1449 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jun 26 19:54:20 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:20.928712    1449 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dqmnm" (UniqueName: "kubernetes.io/secret/08b5fd19-acb1-4bb8-a60d-3471573759a3-default-token-dqmnm") pod "hello-world-app-5f5d8b66bb-smn5g" (UID: "08b5fd19-acb1-4bb8-a60d-3471573759a3")
	Jun 26 19:54:22 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:22.273364    1449 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 574311aba5f920ac3c1f259faf1a214fbf15af912c55c78d1fc58f330ba53fda
	Jun 26 19:54:22 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:22.311199    1449 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 574311aba5f920ac3c1f259faf1a214fbf15af912c55c78d1fc58f330ba53fda
	Jun 26 19:54:22 ingress-addon-legacy-759751 kubelet[1449]: E0626 19:54:22.311825    1449 remote_runtime.go:295] ContainerStatus "574311aba5f920ac3c1f259faf1a214fbf15af912c55c78d1fc58f330ba53fda" from runtime service failed: rpc error: code = NotFound desc = could not find container "574311aba5f920ac3c1f259faf1a214fbf15af912c55c78d1fc58f330ba53fda": container with ID starting with 574311aba5f920ac3c1f259faf1a214fbf15af912c55c78d1fc58f330ba53fda not found: ID does not exist
	Jun 26 19:54:22 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:22.333932    1449 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-c6tcv" (UniqueName: "kubernetes.io/secret/801791f0-5fd1-4cd6-95a1-62a11e9afc6f-minikube-ingress-dns-token-c6tcv") pod "801791f0-5fd1-4cd6-95a1-62a11e9afc6f" (UID: "801791f0-5fd1-4cd6-95a1-62a11e9afc6f")
	Jun 26 19:54:22 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:22.345146    1449 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801791f0-5fd1-4cd6-95a1-62a11e9afc6f-minikube-ingress-dns-token-c6tcv" (OuterVolumeSpecName: "minikube-ingress-dns-token-c6tcv") pod "801791f0-5fd1-4cd6-95a1-62a11e9afc6f" (UID: "801791f0-5fd1-4cd6-95a1-62a11e9afc6f"). InnerVolumeSpecName "minikube-ingress-dns-token-c6tcv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 26 19:54:22 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:22.434283    1449 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-c6tcv" (UniqueName: "kubernetes.io/secret/801791f0-5fd1-4cd6-95a1-62a11e9afc6f-minikube-ingress-dns-token-c6tcv") on node "ingress-addon-legacy-759751" DevicePath ""
	Jun 26 19:54:22 ingress-addon-legacy-759751 kubelet[1449]: E0626 19:54:22.680427    1449 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"574311aba5f920ac3c1f259faf1a214fbf15af912c55c78d1fc58f330ba53fda\": container with ID starting with 574311aba5f920ac3c1f259faf1a214fbf15af912c55c78d1fc58f330ba53fda not found: ID does not exist"
	Jun 26 19:54:24 ingress-addon-legacy-759751 kubelet[1449]: E0626 19:54:24.577896    1449 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-t5jmn.176c4d956c5aadcb", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-t5jmn", UID:"346376a5-3a80-45ae-b431-cf22260afe5a", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-759751"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11e991c1f054dcb, ext:230398931455, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11e991c1f054dcb, ext:230398931455, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-t5jmn.176c4d956c5aadcb" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 26 19:54:24 ingress-addon-legacy-759751 kubelet[1449]: E0626 19:54:24.591857    1449 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-t5jmn.176c4d956c5aadcb", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-t5jmn", UID:"346376a5-3a80-45ae-b431-cf22260afe5a", APIVersion:"v1", ResourceVersion:"478", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-759751"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11e991c1f054dcb, ext:230398931455, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11e991c22df6f13, ext:230463558479, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-t5jmn.176c4d956c5aadcb" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 26 19:54:27 ingress-addon-legacy-759751 kubelet[1449]: W0626 19:54:27.342858    1449 pod_container_deletor.go:77] Container "2f3d93341e55aebf05043184eecb8f4302003beaf1204ea011ff1cdfcfa08f61" not found in pod's containers
	Jun 26 19:54:28 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:28.661581    1449 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/346376a5-3a80-45ae-b431-cf22260afe5a-webhook-cert") pod "346376a5-3a80-45ae-b431-cf22260afe5a" (UID: "346376a5-3a80-45ae-b431-cf22260afe5a")
	Jun 26 19:54:28 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:28.661628    1449 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-f56bn" (UniqueName: "kubernetes.io/secret/346376a5-3a80-45ae-b431-cf22260afe5a-ingress-nginx-token-f56bn") pod "346376a5-3a80-45ae-b431-cf22260afe5a" (UID: "346376a5-3a80-45ae-b431-cf22260afe5a")
	Jun 26 19:54:28 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:28.666981    1449 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/346376a5-3a80-45ae-b431-cf22260afe5a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "346376a5-3a80-45ae-b431-cf22260afe5a" (UID: "346376a5-3a80-45ae-b431-cf22260afe5a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 26 19:54:28 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:28.667210    1449 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/346376a5-3a80-45ae-b431-cf22260afe5a-ingress-nginx-token-f56bn" (OuterVolumeSpecName: "ingress-nginx-token-f56bn") pod "346376a5-3a80-45ae-b431-cf22260afe5a" (UID: "346376a5-3a80-45ae-b431-cf22260afe5a"). InnerVolumeSpecName "ingress-nginx-token-f56bn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 26 19:54:28 ingress-addon-legacy-759751 kubelet[1449]: W0626 19:54:28.690174    1449 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/346376a5-3a80-45ae-b431-cf22260afe5a/volumes" does not exist
	Jun 26 19:54:28 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:28.762062    1449 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/346376a5-3a80-45ae-b431-cf22260afe5a-webhook-cert") on node "ingress-addon-legacy-759751" DevicePath ""
	Jun 26 19:54:28 ingress-addon-legacy-759751 kubelet[1449]: I0626 19:54:28.762097    1449 reconciler.go:319] Volume detached for volume "ingress-nginx-token-f56bn" (UniqueName: "kubernetes.io/secret/346376a5-3a80-45ae-b431-cf22260afe5a-ingress-nginx-token-f56bn") on node "ingress-addon-legacy-759751" DevicePath ""
	
	* 
	* ==> storage-provisioner [4f6f3f36bbca9ec5b1b0819ea005164c9018b00e7c134aeb1d70e888558815c0] <==
	* I0626 19:51:23.473267       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 19:51:23.483273       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 19:51:23.483357       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 19:51:23.494441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 19:51:23.494874       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-759751_d2fa52b0-b3b5-4baa-8682-a46668aa0044!
	I0626 19:51:23.498564       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d19d9c33-6d12-43d1-a9cd-4a5fe218f901", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-759751_d2fa52b0-b3b5-4baa-8682-a46668aa0044 became leader
	I0626 19:51:23.595966       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-759751_d2fa52b0-b3b5-4baa-8682-a46668aa0044!
	
	* 
	* ==> storage-provisioner [d02ccfac365933f9d7d7dfedb663857aed3cc5c9ad15a5d7af38c9056cabf704] <==
	* I0626 19:50:52.658540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0626 19:51:22.661610       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
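
This container was the first storage-provisioner attempt: its startup check, `GET https://10.96.0.1:443/version?timeout=32s`, timed out against the in-cluster `kubernetes` service VIP, so the process exited fatally; the replacement container in the previous section then initialized and acquired the lease a second later. A minimal sketch of that probe, under the assumption the code runs inside a pod:

```go
package example

import (
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// probeServerVersion reproduces the provisioner's startup check: an
// in-cluster GET of /version with the same 32s timeout seen in the log.
func probeServerVersion() error {
	cfg, err := rest.InClusterConfig() // service-account token + service VIP env
	if err != nil {
		return err
	}
	cfg.Timeout = 32 * time.Second
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		// An i/o timeout here generally means the service VIP was not
		// yet routable from the pod (e.g. kube-proxy still syncing).
		return err
	}
	fmt.Println("server version:", v.GitVersion)
	return nil
}
```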
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-759751 -n ingress-addon-legacy-759751
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-759751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (164.26s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-xw4h2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-xw4h2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-xw4h2 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (179.427982ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-67b7f59bb-xw4h2): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-z697w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-z697w -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-z697w -- sh -c "ping -c 1 192.168.39.1": exit status 1 (170.555555ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-67b7f59bb-z697w): exit status 1
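
Both pods print the `PING 192.168.39.1 ...: 56 data bytes` header and then die with `ping: permission denied (are you root?)`: busybox's ping opens a raw ICMP socket, which an unprivileged container may only do when it holds CAP_NET_RAW (or when the node's `net.ipv4.ping_group_range` sysctl covers its group). A sketch of a pod spec that requests the capability explicitly; every name below is illustrative rather than taken from the test's manifests:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pingCapablePod builds a busybox pod that can open the raw ICMP socket
// ping needs, by adding CAP_NET_RAW to the container's capability set.
func pingCapablePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-ping"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.36",
				Command: []string{"sleep", "infinity"},
				SecurityContext: &corev1.SecurityContext{
					Capabilities: &corev1.Capabilities{
						// Without NET_RAW (or a permissive
						// net.ipv4.ping_group_range on the node), an
						// unprivileged container's ping fails exactly
						// as in the test output above.
						Add: []corev1.Capability{"NET_RAW"},
					},
				},
			}},
		},
	}
}
```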
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-050558 -n multinode-050558
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-050558 logs -n 25: (1.243251339s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-102170 ssh -- ls                    | mount-start-2-102170 | jenkins | v1.30.1 | 26 Jun 23 19:58 UTC | 26 Jun 23 19:58 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-102170 ssh --                       | mount-start-2-102170 | jenkins | v1.30.1 | 26 Jun 23 19:58 UTC | 26 Jun 23 19:58 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-102170                           | mount-start-2-102170 | jenkins | v1.30.1 | 26 Jun 23 19:58 UTC | 26 Jun 23 19:58 UTC |
	| start   | -p mount-start-2-102170                           | mount-start-2-102170 | jenkins | v1.30.1 | 26 Jun 23 19:58 UTC | 26 Jun 23 19:59 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-102170 | jenkins | v1.30.1 | 26 Jun 23 19:59 UTC |                     |
	|         | --profile mount-start-2-102170                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-102170 ssh -- ls                    | mount-start-2-102170 | jenkins | v1.30.1 | 26 Jun 23 19:59 UTC | 26 Jun 23 19:59 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-102170 ssh --                       | mount-start-2-102170 | jenkins | v1.30.1 | 26 Jun 23 19:59 UTC | 26 Jun 23 19:59 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-102170                           | mount-start-2-102170 | jenkins | v1.30.1 | 26 Jun 23 19:59 UTC | 26 Jun 23 19:59 UTC |
	| delete  | -p mount-start-1-084753                           | mount-start-1-084753 | jenkins | v1.30.1 | 26 Jun 23 19:59 UTC | 26 Jun 23 19:59 UTC |
	| start   | -p multinode-050558                               | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 19:59 UTC | 26 Jun 23 20:01 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- apply -f                   | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- rollout                    | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- get pods -o                | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- get pods -o                | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | busybox-67b7f59bb-xw4h2 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | busybox-67b7f59bb-z697w --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | busybox-67b7f59bb-xw4h2 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | busybox-67b7f59bb-z697w --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | busybox-67b7f59bb-xw4h2 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | busybox-67b7f59bb-z697w -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- get pods -o                | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | busybox-67b7f59bb-xw4h2                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC |                     |
	|         | busybox-67b7f59bb-xw4h2 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC | 26 Jun 23 20:01 UTC |
	|         | busybox-67b7f59bb-z697w                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-050558 -- exec                       | multinode-050558     | jenkins | v1.30.1 | 26 Jun 23 20:01 UTC |                     |
	|         | busybox-67b7f59bb-z697w -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 19:59:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 19:59:22.570870   27145 out.go:296] Setting OutFile to fd 1 ...
	I0626 19:59:22.571017   27145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:59:22.571027   27145 out.go:309] Setting ErrFile to fd 2...
	I0626 19:59:22.571031   27145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:59:22.571169   27145 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 19:59:22.571752   27145 out.go:303] Setting JSON to false
	I0626 19:59:22.572625   27145 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2510,"bootTime":1687807053,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 19:59:22.572684   27145 start.go:137] virtualization: kvm guest
	I0626 19:59:22.574804   27145 out.go:177] * [multinode-050558] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 19:59:22.576771   27145 notify.go:220] Checking for updates...
	I0626 19:59:22.576773   27145 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 19:59:22.578432   27145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 19:59:22.580203   27145 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 19:59:22.581781   27145 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:59:22.583246   27145 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 19:59:22.584641   27145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 19:59:22.586181   27145 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 19:59:22.618968   27145 out.go:177] * Using the kvm2 driver based on user configuration
	I0626 19:59:22.620255   27145 start.go:297] selected driver: kvm2
	I0626 19:59:22.620265   27145 start.go:954] validating driver "kvm2" against <nil>
	I0626 19:59:22.620274   27145 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 19:59:22.620877   27145 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:59:22.620941   27145 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 19:59:22.634501   27145 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 19:59:22.634550   27145 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 19:59:22.634767   27145 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 19:59:22.634807   27145 cni.go:84] Creating CNI manager for ""
	I0626 19:59:22.634818   27145 cni.go:137] 0 nodes found, recommending kindnet
	I0626 19:59:22.634825   27145 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0626 19:59:22.634837   27145 start_flags.go:319] config:
	{Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:59:22.634993   27145 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:59:22.637058   27145 out.go:177] * Starting control plane node multinode-050558 in cluster multinode-050558
	I0626 19:59:22.638558   27145 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 19:59:22.638598   27145 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 19:59:22.638617   27145 cache.go:57] Caching tarball of preloaded images
	I0626 19:59:22.638706   27145 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 19:59:22.638719   27145 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 19:59:22.639036   27145 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 19:59:22.639061   27145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json: {Name:mk56baa43959123b049ccaee83df2b869f4e311d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:59:22.639203   27145 start.go:365] acquiring machines lock for multinode-050558: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 19:59:22.639238   27145 start.go:369] acquired machines lock for "multinode-050558" in 20.142µs
	I0626 19:59:22.639260   27145 start.go:93] Provisioning new machine with config: &{Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 19:59:22.639349   27145 start.go:125] createHost starting for "" (driver="kvm2")
	I0626 19:59:22.641090   27145 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0626 19:59:22.641228   27145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:59:22.641260   27145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:59:22.654492   27145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I0626 19:59:22.654845   27145 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:59:22.655378   27145 main.go:141] libmachine: Using API Version  1
	I0626 19:59:22.655402   27145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:59:22.655738   27145 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:59:22.655915   27145 main.go:141] libmachine: (multinode-050558) Calling .GetMachineName
	I0626 19:59:22.656076   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 19:59:22.656213   27145 start.go:159] libmachine.API.Create for "multinode-050558" (driver="kvm2")
	I0626 19:59:22.656251   27145 client.go:168] LocalClient.Create starting
	I0626 19:59:22.656286   27145 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem
	I0626 19:59:22.656322   27145 main.go:141] libmachine: Decoding PEM data...
	I0626 19:59:22.656344   27145 main.go:141] libmachine: Parsing certificate...
	I0626 19:59:22.656408   27145 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem
	I0626 19:59:22.656447   27145 main.go:141] libmachine: Decoding PEM data...
	I0626 19:59:22.656481   27145 main.go:141] libmachine: Parsing certificate...
	I0626 19:59:22.656512   27145 main.go:141] libmachine: Running pre-create checks...
	I0626 19:59:22.656526   27145 main.go:141] libmachine: (multinode-050558) Calling .PreCreateCheck
	I0626 19:59:22.656836   27145 main.go:141] libmachine: (multinode-050558) Calling .GetConfigRaw
	I0626 19:59:22.657169   27145 main.go:141] libmachine: Creating machine...
	I0626 19:59:22.657185   27145 main.go:141] libmachine: (multinode-050558) Calling .Create
	I0626 19:59:22.657305   27145 main.go:141] libmachine: (multinode-050558) Creating KVM machine...
	I0626 19:59:22.658314   27145 main.go:141] libmachine: (multinode-050558) DBG | found existing default KVM network
	I0626 19:59:22.658914   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:22.658787   27169 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298a0}
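
The DBG line above shows the driver choosing 192.168.39.0/24 as a free private subnet before creating the `mk-multinode-050558` libvirt network. A simplified illustration of that kind of scan (not minikube's actual network.go logic): walk candidate /24 blocks and skip any that overlap an address already present on the host.

```go
package main

import (
	"fmt"
	"net"
)

// subnetInUse reports whether any local interface address falls inside cidr.
func subnetInUse(cidr *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && cidr.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate 192.168.x.0/24 blocks; the starting octets are illustrative.
	for _, octet := range []int{39, 50, 61, 72} {
		_, cidr, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		if !subnetInUse(cidr) {
			fmt.Println("using free private subnet:", cidr)
			return
		}
	}
}
```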
	I0626 19:59:22.663819   27145 main.go:141] libmachine: (multinode-050558) DBG | trying to create private KVM network mk-multinode-050558 192.168.39.0/24...
	I0626 19:59:22.731505   27145 main.go:141] libmachine: (multinode-050558) Setting up store path in /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558 ...
	I0626 19:59:22.731538   27145 main.go:141] libmachine: (multinode-050558) Building disk image from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso
	I0626 19:59:22.731551   27145 main.go:141] libmachine: (multinode-050558) DBG | private KVM network mk-multinode-050558 192.168.39.0/24 created
	I0626 19:59:22.731569   27145 main.go:141] libmachine: (multinode-050558) Downloading /home/jenkins/minikube-integration/16761-7242/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso...
	I0626 19:59:22.731585   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:22.731448   27169 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:59:22.928052   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:22.927899   27169 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa...
	I0626 19:59:23.324870   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:23.324741   27169 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/multinode-050558.rawdisk...
	I0626 19:59:23.324905   27145 main.go:141] libmachine: (multinode-050558) DBG | Writing magic tar header
	I0626 19:59:23.324920   27145 main.go:141] libmachine: (multinode-050558) DBG | Writing SSH key tar header
	I0626 19:59:23.324929   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:23.324870   27169 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558 ...
	I0626 19:59:23.325022   27145 main.go:141] libmachine: (multinode-050558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558
	I0626 19:59:23.325058   27145 main.go:141] libmachine: (multinode-050558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines
	I0626 19:59:23.325074   27145 main.go:141] libmachine: (multinode-050558) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558 (perms=drwx------)
	I0626 19:59:23.325084   27145 main.go:141] libmachine: (multinode-050558) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines (perms=drwxr-xr-x)
	I0626 19:59:23.325091   27145 main.go:141] libmachine: (multinode-050558) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube (perms=drwxr-xr-x)
	I0626 19:59:23.325101   27145 main.go:141] libmachine: (multinode-050558) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242 (perms=drwxrwxr-x)
	I0626 19:59:23.325115   27145 main.go:141] libmachine: (multinode-050558) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0626 19:59:23.325131   27145 main.go:141] libmachine: (multinode-050558) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0626 19:59:23.325153   27145 main.go:141] libmachine: (multinode-050558) Creating domain...
	I0626 19:59:23.325164   27145 main.go:141] libmachine: (multinode-050558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:59:23.325177   27145 main.go:141] libmachine: (multinode-050558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242
	I0626 19:59:23.325185   27145 main.go:141] libmachine: (multinode-050558) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0626 19:59:23.325193   27145 main.go:141] libmachine: (multinode-050558) DBG | Checking permissions on dir: /home/jenkins
	I0626 19:59:23.325200   27145 main.go:141] libmachine: (multinode-050558) DBG | Checking permissions on dir: /home
	I0626 19:59:23.325228   27145 main.go:141] libmachine: (multinode-050558) DBG | Skipping /home - not owner
	I0626 19:59:23.326144   27145 main.go:141] libmachine: (multinode-050558) define libvirt domain using xml: 
	I0626 19:59:23.326168   27145 main.go:141] libmachine: (multinode-050558) <domain type='kvm'>
	I0626 19:59:23.326180   27145 main.go:141] libmachine: (multinode-050558)   <name>multinode-050558</name>
	I0626 19:59:23.326194   27145 main.go:141] libmachine: (multinode-050558)   <memory unit='MiB'>2200</memory>
	I0626 19:59:23.326225   27145 main.go:141] libmachine: (multinode-050558)   <vcpu>2</vcpu>
	I0626 19:59:23.326248   27145 main.go:141] libmachine: (multinode-050558)   <features>
	I0626 19:59:23.326259   27145 main.go:141] libmachine: (multinode-050558)     <acpi/>
	I0626 19:59:23.326270   27145 main.go:141] libmachine: (multinode-050558)     <apic/>
	I0626 19:59:23.326284   27145 main.go:141] libmachine: (multinode-050558)     <pae/>
	I0626 19:59:23.326296   27145 main.go:141] libmachine: (multinode-050558)     
	I0626 19:59:23.326314   27145 main.go:141] libmachine: (multinode-050558)   </features>
	I0626 19:59:23.326329   27145 main.go:141] libmachine: (multinode-050558)   <cpu mode='host-passthrough'>
	I0626 19:59:23.326355   27145 main.go:141] libmachine: (multinode-050558)   
	I0626 19:59:23.326379   27145 main.go:141] libmachine: (multinode-050558)   </cpu>
	I0626 19:59:23.326391   27145 main.go:141] libmachine: (multinode-050558)   <os>
	I0626 19:59:23.326421   27145 main.go:141] libmachine: (multinode-050558)     <type>hvm</type>
	I0626 19:59:23.326437   27145 main.go:141] libmachine: (multinode-050558)     <boot dev='cdrom'/>
	I0626 19:59:23.326449   27145 main.go:141] libmachine: (multinode-050558)     <boot dev='hd'/>
	I0626 19:59:23.326472   27145 main.go:141] libmachine: (multinode-050558)     <bootmenu enable='no'/>
	I0626 19:59:23.326501   27145 main.go:141] libmachine: (multinode-050558)   </os>
	I0626 19:59:23.326515   27145 main.go:141] libmachine: (multinode-050558)   <devices>
	I0626 19:59:23.326528   27145 main.go:141] libmachine: (multinode-050558)     <disk type='file' device='cdrom'>
	I0626 19:59:23.326545   27145 main.go:141] libmachine: (multinode-050558)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/boot2docker.iso'/>
	I0626 19:59:23.326558   27145 main.go:141] libmachine: (multinode-050558)       <target dev='hdc' bus='scsi'/>
	I0626 19:59:23.326571   27145 main.go:141] libmachine: (multinode-050558)       <readonly/>
	I0626 19:59:23.326586   27145 main.go:141] libmachine: (multinode-050558)     </disk>
	I0626 19:59:23.326604   27145 main.go:141] libmachine: (multinode-050558)     <disk type='file' device='disk'>
	I0626 19:59:23.326626   27145 main.go:141] libmachine: (multinode-050558)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0626 19:59:23.326645   27145 main.go:141] libmachine: (multinode-050558)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/multinode-050558.rawdisk'/>
	I0626 19:59:23.326658   27145 main.go:141] libmachine: (multinode-050558)       <target dev='hda' bus='virtio'/>
	I0626 19:59:23.326673   27145 main.go:141] libmachine: (multinode-050558)     </disk>
	I0626 19:59:23.326685   27145 main.go:141] libmachine: (multinode-050558)     <interface type='network'>
	I0626 19:59:23.326704   27145 main.go:141] libmachine: (multinode-050558)       <source network='mk-multinode-050558'/>
	I0626 19:59:23.326718   27145 main.go:141] libmachine: (multinode-050558)       <model type='virtio'/>
	I0626 19:59:23.326730   27145 main.go:141] libmachine: (multinode-050558)     </interface>
	I0626 19:59:23.326744   27145 main.go:141] libmachine: (multinode-050558)     <interface type='network'>
	I0626 19:59:23.326757   27145 main.go:141] libmachine: (multinode-050558)       <source network='default'/>
	I0626 19:59:23.326771   27145 main.go:141] libmachine: (multinode-050558)       <model type='virtio'/>
	I0626 19:59:23.326787   27145 main.go:141] libmachine: (multinode-050558)     </interface>
	I0626 19:59:23.326800   27145 main.go:141] libmachine: (multinode-050558)     <serial type='pty'>
	I0626 19:59:23.326812   27145 main.go:141] libmachine: (multinode-050558)       <target port='0'/>
	I0626 19:59:23.326824   27145 main.go:141] libmachine: (multinode-050558)     </serial>
	I0626 19:59:23.326833   27145 main.go:141] libmachine: (multinode-050558)     <console type='pty'>
	I0626 19:59:23.326843   27145 main.go:141] libmachine: (multinode-050558)       <target type='serial' port='0'/>
	I0626 19:59:23.326857   27145 main.go:141] libmachine: (multinode-050558)     </console>
	I0626 19:59:23.326870   27145 main.go:141] libmachine: (multinode-050558)     <rng model='virtio'>
	I0626 19:59:23.326888   27145 main.go:141] libmachine: (multinode-050558)       <backend model='random'>/dev/random</backend>
	I0626 19:59:23.326901   27145 main.go:141] libmachine: (multinode-050558)     </rng>
	I0626 19:59:23.326911   27145 main.go:141] libmachine: (multinode-050558)     
	I0626 19:59:23.326937   27145 main.go:141] libmachine: (multinode-050558)     
	I0626 19:59:23.326958   27145 main.go:141] libmachine: (multinode-050558)   </devices>
	I0626 19:59:23.326977   27145 main.go:141] libmachine: (multinode-050558) </domain>
	I0626 19:59:23.326994   27145 main.go:141] libmachine: (multinode-050558) 
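The block above is the complete libvirt domain XML the kvm2 driver defines for this node: 2200 MiB of memory, 2 vCPUs with host-passthrough, the boot2docker ISO as a read-only SCSI cdrom, the raw disk image on a virtio bus, two virtio NICs (the private mk-multinode-050558 network plus libvirt's default network), and serial/console/rng devices. A domain defined this way can be inspected out of band; the virsh calls below are illustrative and are not run by the harness, with the qemu:///system URI assumed:

	# List domains known to the system libvirt instance
	virsh -c qemu:///system list --all
	# Dump the XML minikube generated for this node
	virsh -c qemu:///system dumpxml multinode-050558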
	I0626 19:59:23.331097   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:d2:29:56 in network default
	I0626 19:59:23.331639   27145 main.go:141] libmachine: (multinode-050558) Ensuring networks are active...
	I0626 19:59:23.331662   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:23.332244   27145 main.go:141] libmachine: (multinode-050558) Ensuring network default is active
	I0626 19:59:23.332498   27145 main.go:141] libmachine: (multinode-050558) Ensuring network mk-multinode-050558 is active
	I0626 19:59:23.332914   27145 main.go:141] libmachine: (multinode-050558) Getting domain xml...
	I0626 19:59:23.333616   27145 main.go:141] libmachine: (multinode-050558) Creating domain...
	I0626 19:59:24.526460   27145 main.go:141] libmachine: (multinode-050558) Waiting to get IP...
	I0626 19:59:24.527203   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:24.527680   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:24.527742   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:24.527653   27169 retry.go:31] will retry after 306.06985ms: waiting for machine to come up
	I0626 19:59:24.834862   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:24.835312   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:24.835340   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:24.835246   27169 retry.go:31] will retry after 321.435991ms: waiting for machine to come up
	I0626 19:59:25.158725   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:25.159122   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:25.159150   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:25.159079   27169 retry.go:31] will retry after 398.678309ms: waiting for machine to come up
	I0626 19:59:25.559599   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:25.560130   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:25.560164   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:25.560047   27169 retry.go:31] will retry after 563.594978ms: waiting for machine to come up
	I0626 19:59:26.124666   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:26.125109   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:26.125140   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:26.125041   27169 retry.go:31] will retry after 499.62916ms: waiting for machine to come up
	I0626 19:59:26.626819   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:26.627253   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:26.627282   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:26.627210   27169 retry.go:31] will retry after 807.704608ms: waiting for machine to come up
	I0626 19:59:27.436150   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:27.436509   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:27.436539   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:27.436453   27169 retry.go:31] will retry after 910.536777ms: waiting for machine to come up
	I0626 19:59:28.348999   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:28.349343   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:28.349366   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:28.349300   27169 retry.go:31] will retry after 918.382427ms: waiting for machine to come up
	I0626 19:59:29.269263   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:29.269651   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:29.269679   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:29.269597   27169 retry.go:31] will retry after 1.658927628s: waiting for machine to come up
	I0626 19:59:30.930279   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:30.930649   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:30.930677   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:30.930607   27169 retry.go:31] will retry after 1.447627048s: waiting for machine to come up
	I0626 19:59:32.380038   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:32.380569   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:32.380600   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:32.380491   27169 retry.go:31] will retry after 2.020045182s: waiting for machine to come up
	I0626 19:59:34.402348   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:34.402746   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:34.402775   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:34.402709   27169 retry.go:31] will retry after 2.615025547s: waiting for machine to come up
	I0626 19:59:37.020391   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:37.020762   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:37.020793   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:37.020684   27169 retry.go:31] will retry after 3.103712866s: waiting for machine to come up
	I0626 19:59:40.125540   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:40.125870   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 19:59:40.125894   27145 main.go:141] libmachine: (multinode-050558) DBG | I0626 19:59:40.125850   27169 retry.go:31] will retry after 3.432291311s: waiting for machine to come up
	I0626 19:59:43.559942   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:43.560409   27145 main.go:141] libmachine: (multinode-050558) Found IP for machine: 192.168.39.229
	I0626 19:59:43.560438   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has current primary IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:43.560449   27145 main.go:141] libmachine: (multinode-050558) Reserving static IP address...
	I0626 19:59:43.560866   27145 main.go:141] libmachine: (multinode-050558) DBG | unable to find host DHCP lease matching {name: "multinode-050558", mac: "52:54:00:b7:21:4e", ip: "192.168.39.229"} in network mk-multinode-050558
	I0626 19:59:43.630034   27145 main.go:141] libmachine: (multinode-050558) DBG | Getting to WaitForSSH function...
	I0626 19:59:43.630069   27145 main.go:141] libmachine: (multinode-050558) Reserved static IP address: 192.168.39.229
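The retry loop above polls for roughly 21 seconds, backing off from ~300ms to ~3.4s, until a DHCP lease for MAC 52:54:00:b7:21:4e appears and resolves to 192.168.39.229. The same lookup can be reproduced manually; this is a sketch of an equivalent query, not the driver's own code:

	# Show active DHCP leases on the private minikube network (assumes qemu:///system)
	virsh -c qemu:///system net-dhcp-leases mk-multinode-050558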
	I0626 19:59:43.630083   27145 main.go:141] libmachine: (multinode-050558) Waiting for SSH to be available...
	I0626 19:59:43.632401   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:43.632762   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:43.632805   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:43.632874   27145 main.go:141] libmachine: (multinode-050558) DBG | Using SSH client type: external
	I0626 19:59:43.632901   27145 main.go:141] libmachine: (multinode-050558) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa (-rw-------)
	I0626 19:59:43.632971   27145 main.go:141] libmachine: (multinode-050558) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 19:59:43.632992   27145 main.go:141] libmachine: (multinode-050558) DBG | About to run SSH command:
	I0626 19:59:43.633006   27145 main.go:141] libmachine: (multinode-050558) DBG | exit 0
	I0626 19:59:43.729181   27145 main.go:141] libmachine: (multinode-050558) DBG | SSH cmd err, output: <nil>: 
	I0626 19:59:43.729446   27145 main.go:141] libmachine: (multinode-050558) KVM machine creation complete!
	I0626 19:59:43.729731   27145 main.go:141] libmachine: (multinode-050558) Calling .GetConfigRaw
	I0626 19:59:43.730317   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 19:59:43.730506   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 19:59:43.730684   27145 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0626 19:59:43.730700   27145 main.go:141] libmachine: (multinode-050558) Calling .GetState
	I0626 19:59:43.731922   27145 main.go:141] libmachine: Detecting operating system of created instance...
	I0626 19:59:43.731937   27145 main.go:141] libmachine: Waiting for SSH to be available...
	I0626 19:59:43.731943   27145 main.go:141] libmachine: Getting to WaitForSSH function...
	I0626 19:59:43.731949   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:43.734227   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:43.734534   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:43.734565   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:43.734702   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:43.734877   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:43.735037   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:43.735190   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:43.735331   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 19:59:43.735760   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 19:59:43.735772   27145 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0626 19:59:43.864556   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
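WaitForSSH amounts to running `exit 0` over SSH until it succeeds. A minimal standalone equivalent, reusing the key path, user, and options visible in the external-SSH debug line earlier, might look like this (a hypothetical polling loop, not the harness's implementation):

	until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa \
	    docker@192.168.39.229 exit 0; do
	  sleep 2   # retry until sshd inside the guest accepts the connection
	done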
	I0626 19:59:43.864586   27145 main.go:141] libmachine: Detecting the provisioner...
	I0626 19:59:43.864598   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:43.867092   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:43.867400   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:43.867425   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:43.867521   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:43.867694   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:43.867825   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:43.867931   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:43.868088   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 19:59:43.868474   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 19:59:43.868486   27145 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0626 19:59:44.002008   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2e95ab-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0626 19:59:44.002072   27145 main.go:141] libmachine: found compatible host: buildroot
	I0626 19:59:44.002088   27145 main.go:141] libmachine: Provisioning with buildroot...
	I0626 19:59:44.002099   27145 main.go:141] libmachine: (multinode-050558) Calling .GetMachineName
	I0626 19:59:44.002338   27145 buildroot.go:166] provisioning hostname "multinode-050558"
	I0626 19:59:44.002370   27145 main.go:141] libmachine: (multinode-050558) Calling .GetMachineName
	I0626 19:59:44.002560   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:44.005108   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.005427   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:44.005450   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.005601   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:44.005735   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:44.005873   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:44.005966   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:44.006104   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 19:59:44.006482   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 19:59:44.006501   27145 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-050558 && echo "multinode-050558" | sudo tee /etc/hostname
	I0626 19:59:44.148825   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-050558
	
	I0626 19:59:44.148855   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:44.151248   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.151576   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:44.151610   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.151771   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:44.151961   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:44.152120   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:44.152306   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:44.152490   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 19:59:44.152890   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 19:59:44.152915   27145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-050558' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-050558/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-050558' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 19:59:44.293029   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
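The shell fragment above makes the new hostname resolve locally: if no /etc/hosts line already names multinode-050558, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. On a fresh guest the net effect is a single added line:

	127.0.1.1 multinode-050558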
	I0626 19:59:44.293056   27145 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 19:59:44.293077   27145 buildroot.go:174] setting up certificates
	I0626 19:59:44.293086   27145 provision.go:83] configureAuth start
	I0626 19:59:44.293098   27145 main.go:141] libmachine: (multinode-050558) Calling .GetMachineName
	I0626 19:59:44.293384   27145 main.go:141] libmachine: (multinode-050558) Calling .GetIP
	I0626 19:59:44.295766   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.296061   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:44.296088   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.296223   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:44.298029   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.298356   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:44.298383   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.298504   27145 provision.go:138] copyHostCerts
	I0626 19:59:44.298539   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 19:59:44.298586   27145 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 19:59:44.298597   27145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 19:59:44.298665   27145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 19:59:44.298748   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 19:59:44.298772   27145 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 19:59:44.298781   27145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 19:59:44.298811   27145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 19:59:44.298862   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 19:59:44.298885   27145 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 19:59:44.298892   27145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 19:59:44.298918   27145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 19:59:44.298981   27145 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.multinode-050558 san=[192.168.39.229 192.168.39.229 localhost 127.0.0.1 minikube multinode-050558]
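The server certificate is signed by the local minikube CA, and its SAN list covers the node IP, loopback, and both hostnames, so the same server.pem validates whether the machine is addressed by IP or by name. One way to confirm the embedded SANs afterwards, assuming stock openssl and the store path from the log:

	# Inspect the Subject Alternative Names in the generated server cert (illustrative check)
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'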
	I0626 19:59:44.523907   27145 provision.go:172] copyRemoteCerts
	I0626 19:59:44.523960   27145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 19:59:44.523981   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:44.526496   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.526852   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:44.526877   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.527071   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:44.527269   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:44.527429   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:44.527585   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 19:59:44.623377   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0626 19:59:44.623445   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 19:59:44.645708   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0626 19:59:44.645795   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0626 19:59:44.667260   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0626 19:59:44.667325   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 19:59:44.688446   27145 provision.go:86] duration metric: configureAuth took 395.33496ms
	I0626 19:59:44.688475   27145 buildroot.go:189] setting minikube options for container-runtime
	I0626 19:59:44.688670   27145 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 19:59:44.688748   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:44.690935   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.691183   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:44.691211   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:44.691349   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:44.691557   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:44.691711   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:44.691858   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:44.692030   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 19:59:44.692395   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 19:59:44.692409   27145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 19:59:45.010860   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
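The `%!s(MISSING)` in the command text above is Go's fmt marker for a missing printf argument in the log message itself, not what actually ran on the guest; the echoed output confirms the file was written as intended. Reconstructed from that output, the guest ends up with a one-variable sysconfig file:

	# /etc/sysconfig/crio.minikube, as written by the command above
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '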
	I0626 19:59:45.010886   27145 main.go:141] libmachine: Checking connection to Docker...
	I0626 19:59:45.010894   27145 main.go:141] libmachine: (multinode-050558) Calling .GetURL
	I0626 19:59:45.012105   27145 main.go:141] libmachine: (multinode-050558) DBG | Using libvirt version 6000000
	I0626 19:59:45.014083   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.014373   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:45.014400   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.014556   27145 main.go:141] libmachine: Docker is up and running!
	I0626 19:59:45.014568   27145 main.go:141] libmachine: Reticulating splines...
	I0626 19:59:45.014573   27145 client.go:171] LocalClient.Create took 22.358312997s
	I0626 19:59:45.014602   27145 start.go:167] duration metric: libmachine.API.Create for "multinode-050558" took 22.35838816s
	I0626 19:59:45.014614   27145 start.go:300] post-start starting for "multinode-050558" (driver="kvm2")
	I0626 19:59:45.014625   27145 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 19:59:45.014656   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 19:59:45.014909   27145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 19:59:45.014934   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:45.017284   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.017647   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:45.017676   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.017772   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:45.017933   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:45.018083   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:45.018239   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 19:59:45.114688   27145 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 19:59:45.118933   27145 command_runner.go:130] > NAME=Buildroot
	I0626 19:59:45.118951   27145 command_runner.go:130] > VERSION=2021.02.12-1-ge2e95ab-dirty
	I0626 19:59:45.118958   27145 command_runner.go:130] > ID=buildroot
	I0626 19:59:45.118965   27145 command_runner.go:130] > VERSION_ID=2021.02.12
	I0626 19:59:45.118972   27145 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0626 19:59:45.119006   27145 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 19:59:45.119022   27145 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 19:59:45.119094   27145 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 19:59:45.119196   27145 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 19:59:45.119209   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /etc/ssl/certs/144432.pem
	I0626 19:59:45.119292   27145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 19:59:45.127723   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 19:59:45.150272   27145 start.go:303] post-start completed in 135.64638ms
	I0626 19:59:45.150314   27145 main.go:141] libmachine: (multinode-050558) Calling .GetConfigRaw
	I0626 19:59:45.150851   27145 main.go:141] libmachine: (multinode-050558) Calling .GetIP
	I0626 19:59:45.153167   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.153509   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:45.153539   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.153745   27145 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 19:59:45.153904   27145 start.go:128] duration metric: createHost completed in 22.514546883s
	I0626 19:59:45.153922   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:45.155777   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.156135   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:45.156164   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.156309   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:45.156475   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:45.156623   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:45.156754   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:45.156919   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 19:59:45.157285   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 19:59:45.157297   27145 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 19:59:45.290219   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687809585.267250942
	
	I0626 19:59:45.290243   27145 fix.go:206] guest clock: 1687809585.267250942
	I0626 19:59:45.290252   27145 fix.go:219] Guest: 2023-06-26 19:59:45.267250942 +0000 UTC Remote: 2023-06-26 19:59:45.153913625 +0000 UTC m=+22.614479800 (delta=113.337317ms)
	I0626 19:59:45.290275   27145 fix.go:190] guest clock delta is within tolerance: 113.337317ms
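The tolerance check is plain arithmetic: the guest reports 1687809585.267250942 from `date +%s.%N` (the `%!s(MISSING).%!N(MISSING)` above is again a log-formatting artifact, not the command that ran), the host recorded 19:59:45.153913625 for the same instant, and the 113.337317ms difference is below the driver's skew threshold, so no clock adjustment is performed.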
	I0626 19:59:45.290281   27145 start.go:83] releasing machines lock for "multinode-050558", held for 22.651032025s
	I0626 19:59:45.290301   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 19:59:45.290694   27145 main.go:141] libmachine: (multinode-050558) Calling .GetIP
	I0626 19:59:45.293288   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.293602   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:45.293633   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.293756   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 19:59:45.294242   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 19:59:45.294446   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 19:59:45.294518   27145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 19:59:45.294563   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:45.294640   27145 ssh_runner.go:195] Run: cat /version.json
	I0626 19:59:45.294665   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 19:59:45.296843   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.297134   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:45.297161   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.297184   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.297232   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:45.297435   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:45.297615   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:45.297624   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:45.297647   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:45.297785   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 19:59:45.297803   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 19:59:45.297940   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 19:59:45.298074   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 19:59:45.298188   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 19:59:45.386118   27145 command_runner.go:130] > {"iso_version": "v1.30.1-1687455737-16703", "kicbase_version": "v0.0.39-1687367788-16703", "minikube_version": "v1.30.1", "commit": "698b58f2be1e4f36ba4ac648454cf7f7b59eb6ea"}
	I0626 19:59:45.386253   27145 ssh_runner.go:195] Run: systemctl --version
	I0626 19:59:45.411567   27145 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0626 19:59:45.412076   27145 command_runner.go:130] > systemd 247 (247)
	I0626 19:59:45.412098   27145 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0626 19:59:45.412153   27145 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 19:59:45.568802   27145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 19:59:45.574786   27145 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0626 19:59:45.574840   27145 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 19:59:45.574891   27145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 19:59:45.589094   27145 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0626 19:59:45.589149   27145 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 19:59:45.589158   27145 start.go:466] detecting cgroup driver to use...
	I0626 19:59:45.589223   27145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 19:59:45.602236   27145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 19:59:45.614324   27145 docker.go:196] disabling cri-docker service (if available) ...
	I0626 19:59:45.614404   27145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 19:59:45.626704   27145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 19:59:45.638675   27145 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 19:59:45.651626   27145 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0626 19:59:45.735439   27145 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 19:59:45.854096   27145 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0626 19:59:45.854128   27145 docker.go:212] disabling docker service ...
	I0626 19:59:45.854183   27145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 19:59:45.867528   27145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 19:59:45.878252   27145 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0626 19:59:45.878377   27145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 19:59:45.891045   27145 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0626 19:59:45.986943   27145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 19:59:45.999699   27145 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0626 19:59:46.000112   27145 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0626 19:59:46.096961   27145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 19:59:46.109491   27145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 19:59:46.126017   27145 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0626 19:59:46.126046   27145 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 19:59:46.126085   27145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:59:46.134670   27145 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 19:59:46.134724   27145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:59:46.143289   27145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:59:46.151796   27145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
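The three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image and cgroup settings the test expects. Assuming a stock drop-in file, the affected TOML keys end up equivalent to the sketch below (the real file may carry additional settings):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"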
	I0626 19:59:46.160204   27145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 19:59:46.169504   27145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 19:59:46.177085   27145 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 19:59:46.177180   27145 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 19:59:46.177230   27145 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 19:59:46.189086   27145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
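The failed sysctl, the modprobe, and the ip_forward write together establish the standard kubeadm networking prerequisites; a quick manual verification would look like this (a sketch; on most kernels the bridge sysctl defaults to 1 once br_netfilter is loaded):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # expect: net.bridge.bridge-nf-call-iptables = 1
    cat /proc/sys/net/ipv4/ip_forward            # expect: 1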
	I0626 19:59:46.197101   27145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 19:59:46.300402   27145 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 19:59:46.465718   27145 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 19:59:46.465779   27145 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 19:59:46.470247   27145 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0626 19:59:46.470262   27145 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0626 19:59:46.470268   27145 command_runner.go:130] > Device: 16h/22d	Inode: 705         Links: 1
	I0626 19:59:46.470274   27145 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 19:59:46.470279   27145 command_runner.go:130] > Access: 2023-06-26 19:59:46.428439491 +0000
	I0626 19:59:46.470286   27145 command_runner.go:130] > Modify: 2023-06-26 19:59:46.428439491 +0000
	I0626 19:59:46.470291   27145 command_runner.go:130] > Change: 2023-06-26 19:59:46.428439491 +0000
	I0626 19:59:46.470297   27145 command_runner.go:130] >  Birth: -
	I0626 19:59:46.470719   27145 start.go:534] Will wait 60s for crictl version
	I0626 19:59:46.470754   27145 ssh_runner.go:195] Run: which crictl
	I0626 19:59:46.474362   27145 command_runner.go:130] > /usr/bin/crictl
	I0626 19:59:46.474568   27145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 19:59:46.504787   27145 command_runner.go:130] > Version:  0.1.0
	I0626 19:59:46.504821   27145 command_runner.go:130] > RuntimeName:  cri-o
	I0626 19:59:46.504828   27145 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0626 19:59:46.504836   27145 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0626 19:59:46.506224   27145 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
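The same version handshake can be reproduced against the socket directly, bypassing /etc/crictl.yaml (a sketch; the endpoint matches the one written earlier):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version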
	I0626 19:59:46.506298   27145 ssh_runner.go:195] Run: crio --version
	I0626 19:59:46.550933   27145 command_runner.go:130] > crio version 1.24.1
	I0626 19:59:46.550956   27145 command_runner.go:130] > Version:          1.24.1
	I0626 19:59:46.550965   27145 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 19:59:46.550972   27145 command_runner.go:130] > GitTreeState:     dirty
	I0626 19:59:46.550979   27145 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 19:59:46.550987   27145 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 19:59:46.550994   27145 command_runner.go:130] > Compiler:         gc
	I0626 19:59:46.551001   27145 command_runner.go:130] > Platform:         linux/amd64
	I0626 19:59:46.551009   27145 command_runner.go:130] > Linkmode:         dynamic
	I0626 19:59:46.551025   27145 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 19:59:46.551036   27145 command_runner.go:130] > SeccompEnabled:   true
	I0626 19:59:46.551048   27145 command_runner.go:130] > AppArmorEnabled:  false
	I0626 19:59:46.552316   27145 ssh_runner.go:195] Run: crio --version
	I0626 19:59:46.599645   27145 command_runner.go:130] > crio version 1.24.1
	I0626 19:59:46.599662   27145 command_runner.go:130] > Version:          1.24.1
	I0626 19:59:46.599697   27145 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 19:59:46.599711   27145 command_runner.go:130] > GitTreeState:     dirty
	I0626 19:59:46.599717   27145 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 19:59:46.599721   27145 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 19:59:46.599725   27145 command_runner.go:130] > Compiler:         gc
	I0626 19:59:46.599729   27145 command_runner.go:130] > Platform:         linux/amd64
	I0626 19:59:46.599734   27145 command_runner.go:130] > Linkmode:         dynamic
	I0626 19:59:46.599741   27145 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 19:59:46.599746   27145 command_runner.go:130] > SeccompEnabled:   true
	I0626 19:59:46.599750   27145 command_runner.go:130] > AppArmorEnabled:  false
	I0626 19:59:46.603563   27145 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 19:59:46.604858   27145 main.go:141] libmachine: (multinode-050558) Calling .GetIP
	I0626 19:59:46.607370   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:46.607694   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 19:59:46.607724   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 19:59:46.607903   27145 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 19:59:46.612057   27145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
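The bash pipeline above is an idempotent upsert: it strips any stale host.minikube.internal entry before appending the current one, so repeated starts leave exactly one line of the form:

    192.168.39.1	host.minikube.internal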
	I0626 19:59:46.624575   27145 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 19:59:46.624620   27145 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 19:59:46.653087   27145 command_runner.go:130] > {
	I0626 19:59:46.653111   27145 command_runner.go:130] >   "images": [
	I0626 19:59:46.653118   27145 command_runner.go:130] >   ]
	I0626 19:59:46.653123   27145 command_runner.go:130] > }
	I0626 19:59:46.654458   27145 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 19:59:46.654517   27145 ssh_runner.go:195] Run: which lz4
	I0626 19:59:46.658767   27145 command_runner.go:130] > /usr/bin/lz4
	I0626 19:59:46.658989   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0626 19:59:46.659062   27145 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 19:59:46.663697   27145 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 19:59:46.663725   27145 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 19:59:46.663739   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 19:59:48.414019   27145 crio.go:444] Took 1.754959 seconds to copy over tarball
	I0626 19:59:48.414104   27145 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 19:59:51.026352   27145 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.612205443s)
	I0626 19:59:51.026391   27145 crio.go:451] Took 2.612343 seconds to extract the tarball
	I0626 19:59:51.026400   27145 ssh_runner.go:146] rm: /preloaded.tar.lz4
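The preload path is check, copy, extract, delete; once the tarball is on the guest, the extract-and-verify half can be reproduced by hand (a sketch using the same paths as the log):

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # unpacks image layers under /var/lib/containers/storage
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json                 # should now list the Kubernetes v1.27.3 images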
	I0626 19:59:51.064346   27145 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 19:59:51.122178   27145 command_runner.go:130] > {
	I0626 19:59:51.122200   27145 command_runner.go:130] >   "images": [
	I0626 19:59:51.122204   27145 command_runner.go:130] >     {
	I0626 19:59:51.122212   27145 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0626 19:59:51.122216   27145 command_runner.go:130] >       "repoTags": [
	I0626 19:59:51.122222   27145 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0626 19:59:51.122226   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122230   27145 command_runner.go:130] >       "repoDigests": [
	I0626 19:59:51.122237   27145 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0626 19:59:51.122244   27145 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0626 19:59:51.122249   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122253   27145 command_runner.go:130] >       "size": "65249302",
	I0626 19:59:51.122257   27145 command_runner.go:130] >       "uid": null,
	I0626 19:59:51.122263   27145 command_runner.go:130] >       "username": "",
	I0626 19:59:51.122271   27145 command_runner.go:130] >       "spec": null
	I0626 19:59:51.122279   27145 command_runner.go:130] >     },
	I0626 19:59:51.122306   27145 command_runner.go:130] >     {
	I0626 19:59:51.122318   27145 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0626 19:59:51.122322   27145 command_runner.go:130] >       "repoTags": [
	I0626 19:59:51.122327   27145 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0626 19:59:51.122332   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122337   27145 command_runner.go:130] >       "repoDigests": [
	I0626 19:59:51.122346   27145 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0626 19:59:51.122355   27145 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0626 19:59:51.122359   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122365   27145 command_runner.go:130] >       "size": "31470524",
	I0626 19:59:51.122369   27145 command_runner.go:130] >       "uid": null,
	I0626 19:59:51.122376   27145 command_runner.go:130] >       "username": "",
	I0626 19:59:51.122391   27145 command_runner.go:130] >       "spec": null
	I0626 19:59:51.122397   27145 command_runner.go:130] >     },
	I0626 19:59:51.122400   27145 command_runner.go:130] >     {
	I0626 19:59:51.122408   27145 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0626 19:59:51.122414   27145 command_runner.go:130] >       "repoTags": [
	I0626 19:59:51.122419   27145 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0626 19:59:51.122425   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122429   27145 command_runner.go:130] >       "repoDigests": [
	I0626 19:59:51.122438   27145 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0626 19:59:51.122447   27145 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0626 19:59:51.122452   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122458   27145 command_runner.go:130] >       "size": "53621675",
	I0626 19:59:51.122462   27145 command_runner.go:130] >       "uid": null,
	I0626 19:59:51.122468   27145 command_runner.go:130] >       "username": "",
	I0626 19:59:51.122472   27145 command_runner.go:130] >       "spec": null
	I0626 19:59:51.122478   27145 command_runner.go:130] >     },
	I0626 19:59:51.122482   27145 command_runner.go:130] >     {
	I0626 19:59:51.122490   27145 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0626 19:59:51.122497   27145 command_runner.go:130] >       "repoTags": [
	I0626 19:59:51.122503   27145 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0626 19:59:51.122508   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122513   27145 command_runner.go:130] >       "repoDigests": [
	I0626 19:59:51.122521   27145 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0626 19:59:51.122530   27145 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0626 19:59:51.122536   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122540   27145 command_runner.go:130] >       "size": "297083935",
	I0626 19:59:51.122546   27145 command_runner.go:130] >       "uid": {
	I0626 19:59:51.122551   27145 command_runner.go:130] >         "value": "0"
	I0626 19:59:51.122559   27145 command_runner.go:130] >       },
	I0626 19:59:51.122565   27145 command_runner.go:130] >       "username": "",
	I0626 19:59:51.122569   27145 command_runner.go:130] >       "spec": null
	I0626 19:59:51.122575   27145 command_runner.go:130] >     },
	I0626 19:59:51.122578   27145 command_runner.go:130] >     {
	I0626 19:59:51.122587   27145 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0626 19:59:51.122592   27145 command_runner.go:130] >       "repoTags": [
	I0626 19:59:51.122597   27145 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0626 19:59:51.122603   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122607   27145 command_runner.go:130] >       "repoDigests": [
	I0626 19:59:51.122616   27145 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0626 19:59:51.122625   27145 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0626 19:59:51.122632   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122636   27145 command_runner.go:130] >       "size": "122065872",
	I0626 19:59:51.122642   27145 command_runner.go:130] >       "uid": {
	I0626 19:59:51.122650   27145 command_runner.go:130] >         "value": "0"
	I0626 19:59:51.122656   27145 command_runner.go:130] >       },
	I0626 19:59:51.122659   27145 command_runner.go:130] >       "username": "",
	I0626 19:59:51.122666   27145 command_runner.go:130] >       "spec": null
	I0626 19:59:51.122670   27145 command_runner.go:130] >     },
	I0626 19:59:51.122675   27145 command_runner.go:130] >     {
	I0626 19:59:51.122681   27145 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0626 19:59:51.122687   27145 command_runner.go:130] >       "repoTags": [
	I0626 19:59:51.122692   27145 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0626 19:59:51.122698   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122702   27145 command_runner.go:130] >       "repoDigests": [
	I0626 19:59:51.122712   27145 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0626 19:59:51.122721   27145 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0626 19:59:51.122727   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122731   27145 command_runner.go:130] >       "size": "113919286",
	I0626 19:59:51.122737   27145 command_runner.go:130] >       "uid": {
	I0626 19:59:51.122741   27145 command_runner.go:130] >         "value": "0"
	I0626 19:59:51.122746   27145 command_runner.go:130] >       },
	I0626 19:59:51.122750   27145 command_runner.go:130] >       "username": "",
	I0626 19:59:51.122757   27145 command_runner.go:130] >       "spec": null
	I0626 19:59:51.122760   27145 command_runner.go:130] >     },
	I0626 19:59:51.122766   27145 command_runner.go:130] >     {
	I0626 19:59:51.122772   27145 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0626 19:59:51.122778   27145 command_runner.go:130] >       "repoTags": [
	I0626 19:59:51.122782   27145 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0626 19:59:51.122788   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122792   27145 command_runner.go:130] >       "repoDigests": [
	I0626 19:59:51.122802   27145 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0626 19:59:51.122811   27145 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0626 19:59:51.122817   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122821   27145 command_runner.go:130] >       "size": "72713623",
	I0626 19:59:51.122827   27145 command_runner.go:130] >       "uid": null,
	I0626 19:59:51.122831   27145 command_runner.go:130] >       "username": "",
	I0626 19:59:51.122836   27145 command_runner.go:130] >       "spec": null
	I0626 19:59:51.122840   27145 command_runner.go:130] >     },
	I0626 19:59:51.122845   27145 command_runner.go:130] >     {
	I0626 19:59:51.122853   27145 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0626 19:59:51.122859   27145 command_runner.go:130] >       "repoTags": [
	I0626 19:59:51.122864   27145 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0626 19:59:51.122870   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122874   27145 command_runner.go:130] >       "repoDigests": [
	I0626 19:59:51.122880   27145 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0626 19:59:51.122897   27145 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0626 19:59:51.122903   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122907   27145 command_runner.go:130] >       "size": "59811126",
	I0626 19:59:51.122913   27145 command_runner.go:130] >       "uid": {
	I0626 19:59:51.122917   27145 command_runner.go:130] >         "value": "0"
	I0626 19:59:51.122923   27145 command_runner.go:130] >       },
	I0626 19:59:51.122927   27145 command_runner.go:130] >       "username": "",
	I0626 19:59:51.122933   27145 command_runner.go:130] >       "spec": null
	I0626 19:59:51.122937   27145 command_runner.go:130] >     },
	I0626 19:59:51.122942   27145 command_runner.go:130] >     {
	I0626 19:59:51.122948   27145 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0626 19:59:51.122955   27145 command_runner.go:130] >       "repoTags": [
	I0626 19:59:51.122959   27145 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0626 19:59:51.122965   27145 command_runner.go:130] >       ],
	I0626 19:59:51.122969   27145 command_runner.go:130] >       "repoDigests": [
	I0626 19:59:51.122978   27145 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0626 19:59:51.122996   27145 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0626 19:59:51.123002   27145 command_runner.go:130] >       ],
	I0626 19:59:51.123007   27145 command_runner.go:130] >       "size": "750414",
	I0626 19:59:51.123012   27145 command_runner.go:130] >       "uid": {
	I0626 19:59:51.123017   27145 command_runner.go:130] >         "value": "65535"
	I0626 19:59:51.123022   27145 command_runner.go:130] >       },
	I0626 19:59:51.123026   27145 command_runner.go:130] >       "username": "",
	I0626 19:59:51.123032   27145 command_runner.go:130] >       "spec": null
	I0626 19:59:51.123036   27145 command_runner.go:130] >     }
	I0626 19:59:51.123042   27145 command_runner.go:130] >   ]
	I0626 19:59:51.123045   27145 command_runner.go:130] > }
	I0626 19:59:51.123139   27145 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 19:59:51.123149   27145 cache_images.go:84] Images are preloaded, skipping loading
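With the preload in place, the JSON above reduces to a quick tag listing (a sketch; assumes jq is installed wherever the command is run):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # registry.k8s.io/kube-apiserver:v1.27.3
    # registry.k8s.io/kube-proxy:v1.27.3
    # ...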
	I0626 19:59:51.123204   27145 ssh_runner.go:195] Run: crio config
	I0626 19:59:51.174031   27145 command_runner.go:130] ! time="2023-06-26 19:59:51.162672766Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0626 19:59:51.174093   27145 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
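What follows is the fully rendered CRI-O configuration. To spot-check individual keys without reading the whole dump, something like this works (a sketch; the info banners above appear to go to stderr and are filtered out here):

    sudo crio config 2>/dev/null | grep -E '^(cgroup_manager|conmon_cgroup|pids_limit|pinns_path)'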
	I0626 19:59:51.182977   27145 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0626 19:59:51.183013   27145 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0626 19:59:51.183020   27145 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0626 19:59:51.183023   27145 command_runner.go:130] > #
	I0626 19:59:51.183031   27145 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0626 19:59:51.183040   27145 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0626 19:59:51.183051   27145 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0626 19:59:51.183064   27145 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0626 19:59:51.183077   27145 command_runner.go:130] > # reload'.
	I0626 19:59:51.183087   27145 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0626 19:59:51.183097   27145 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0626 19:59:51.183109   27145 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0626 19:59:51.183118   27145 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0626 19:59:51.183141   27145 command_runner.go:130] > [crio]
	I0626 19:59:51.183153   27145 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0626 19:59:51.183161   27145 command_runner.go:130] > # container images, in this directory.
	I0626 19:59:51.183169   27145 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0626 19:59:51.183182   27145 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0626 19:59:51.183193   27145 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0626 19:59:51.183202   27145 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0626 19:59:51.183214   27145 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0626 19:59:51.183224   27145 command_runner.go:130] > storage_driver = "overlay"
	I0626 19:59:51.183236   27145 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0626 19:59:51.183248   27145 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0626 19:59:51.183255   27145 command_runner.go:130] > storage_option = [
	I0626 19:59:51.183265   27145 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0626 19:59:51.183270   27145 command_runner.go:130] > ]
	I0626 19:59:51.183281   27145 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0626 19:59:51.183293   27145 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0626 19:59:51.183303   27145 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0626 19:59:51.183316   27145 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0626 19:59:51.183330   27145 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0626 19:59:51.183340   27145 command_runner.go:130] > # always happen on a node reboot
	I0626 19:59:51.183348   27145 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0626 19:59:51.183357   27145 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0626 19:59:51.183362   27145 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0626 19:59:51.183373   27145 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0626 19:59:51.183379   27145 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0626 19:59:51.183387   27145 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0626 19:59:51.183397   27145 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0626 19:59:51.183403   27145 command_runner.go:130] > # internal_wipe = true
	I0626 19:59:51.183409   27145 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0626 19:59:51.183417   27145 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0626 19:59:51.183423   27145 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0626 19:59:51.183428   27145 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0626 19:59:51.183436   27145 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0626 19:59:51.183440   27145 command_runner.go:130] > [crio.api]
	I0626 19:59:51.183447   27145 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0626 19:59:51.183451   27145 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0626 19:59:51.183459   27145 command_runner.go:130] > # IP address on which the stream server will listen.
	I0626 19:59:51.183463   27145 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0626 19:59:51.183470   27145 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0626 19:59:51.183477   27145 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0626 19:59:51.183481   27145 command_runner.go:130] > # stream_port = "0"
	I0626 19:59:51.183488   27145 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0626 19:59:51.183492   27145 command_runner.go:130] > # stream_enable_tls = false
	I0626 19:59:51.183497   27145 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0626 19:59:51.183504   27145 command_runner.go:130] > # stream_idle_timeout = ""
	I0626 19:59:51.183509   27145 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0626 19:59:51.183515   27145 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0626 19:59:51.183521   27145 command_runner.go:130] > # minutes.
	I0626 19:59:51.183524   27145 command_runner.go:130] > # stream_tls_cert = ""
	I0626 19:59:51.183530   27145 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0626 19:59:51.183538   27145 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0626 19:59:51.183543   27145 command_runner.go:130] > # stream_tls_key = ""
	I0626 19:59:51.183551   27145 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0626 19:59:51.183557   27145 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0626 19:59:51.183565   27145 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0626 19:59:51.183569   27145 command_runner.go:130] > # stream_tls_ca = ""
	I0626 19:59:51.183578   27145 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 19:59:51.183583   27145 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0626 19:59:51.183591   27145 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 19:59:51.183595   27145 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0626 19:59:51.183615   27145 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0626 19:59:51.183623   27145 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0626 19:59:51.183627   27145 command_runner.go:130] > [crio.runtime]
	I0626 19:59:51.183635   27145 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0626 19:59:51.183640   27145 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0626 19:59:51.183646   27145 command_runner.go:130] > # "nofile=1024:2048"
	I0626 19:59:51.183652   27145 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0626 19:59:51.183658   27145 command_runner.go:130] > # default_ulimits = [
	I0626 19:59:51.183661   27145 command_runner.go:130] > # ]
	I0626 19:59:51.183669   27145 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0626 19:59:51.183675   27145 command_runner.go:130] > # no_pivot = false
	I0626 19:59:51.183680   27145 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0626 19:59:51.183686   27145 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0626 19:59:51.183693   27145 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0626 19:59:51.183699   27145 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0626 19:59:51.183706   27145 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0626 19:59:51.183714   27145 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 19:59:51.183720   27145 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0626 19:59:51.183724   27145 command_runner.go:130] > # Cgroup setting for conmon
	I0626 19:59:51.183731   27145 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0626 19:59:51.183737   27145 command_runner.go:130] > conmon_cgroup = "pod"
	I0626 19:59:51.183743   27145 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0626 19:59:51.183750   27145 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0626 19:59:51.183757   27145 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 19:59:51.183763   27145 command_runner.go:130] > conmon_env = [
	I0626 19:59:51.183768   27145 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0626 19:59:51.183771   27145 command_runner.go:130] > ]
	I0626 19:59:51.183779   27145 command_runner.go:130] > # Additional environment variables to set for all the
	I0626 19:59:51.183784   27145 command_runner.go:130] > # containers. These are overridden if set in the
	I0626 19:59:51.183791   27145 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0626 19:59:51.183796   27145 command_runner.go:130] > # default_env = [
	I0626 19:59:51.183800   27145 command_runner.go:130] > # ]
	I0626 19:59:51.183806   27145 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0626 19:59:51.183814   27145 command_runner.go:130] > # selinux = false
	I0626 19:59:51.183821   27145 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0626 19:59:51.183829   27145 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0626 19:59:51.183834   27145 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0626 19:59:51.183841   27145 command_runner.go:130] > # seccomp_profile = ""
	I0626 19:59:51.183847   27145 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0626 19:59:51.183855   27145 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0626 19:59:51.183861   27145 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0626 19:59:51.183867   27145 command_runner.go:130] > # which might increase security.
	I0626 19:59:51.183872   27145 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0626 19:59:51.183878   27145 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0626 19:59:51.183883   27145 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0626 19:59:51.183891   27145 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0626 19:59:51.183897   27145 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0626 19:59:51.183904   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 19:59:51.183909   27145 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0626 19:59:51.183916   27145 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0626 19:59:51.183921   27145 command_runner.go:130] > # the cgroup blockio controller.
	I0626 19:59:51.183927   27145 command_runner.go:130] > # blockio_config_file = ""
	I0626 19:59:51.183933   27145 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0626 19:59:51.183939   27145 command_runner.go:130] > # irqbalance daemon.
	I0626 19:59:51.183944   27145 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0626 19:59:51.183950   27145 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0626 19:59:51.183957   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 19:59:51.183961   27145 command_runner.go:130] > # rdt_config_file = ""
	I0626 19:59:51.183972   27145 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0626 19:59:51.183978   27145 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0626 19:59:51.183984   27145 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0626 19:59:51.183989   27145 command_runner.go:130] > # separate_pull_cgroup = ""
	I0626 19:59:51.183995   27145 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0626 19:59:51.184003   27145 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0626 19:59:51.184006   27145 command_runner.go:130] > # will be added.
	I0626 19:59:51.184012   27145 command_runner.go:130] > # default_capabilities = [
	I0626 19:59:51.184019   27145 command_runner.go:130] > # 	"CHOWN",
	I0626 19:59:51.184025   27145 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0626 19:59:51.184029   27145 command_runner.go:130] > # 	"FSETID",
	I0626 19:59:51.184032   27145 command_runner.go:130] > # 	"FOWNER",
	I0626 19:59:51.184039   27145 command_runner.go:130] > # 	"SETGID",
	I0626 19:59:51.184042   27145 command_runner.go:130] > # 	"SETUID",
	I0626 19:59:51.184049   27145 command_runner.go:130] > # 	"SETPCAP",
	I0626 19:59:51.184052   27145 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0626 19:59:51.184057   27145 command_runner.go:130] > # 	"KILL",
	I0626 19:59:51.184060   27145 command_runner.go:130] > # ]
	I0626 19:59:51.184066   27145 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0626 19:59:51.184074   27145 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 19:59:51.184078   27145 command_runner.go:130] > # default_sysctls = [
	I0626 19:59:51.184082   27145 command_runner.go:130] > # ]
	I0626 19:59:51.184087   27145 command_runner.go:130] > # List of devices on the host that a
	I0626 19:59:51.184095   27145 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0626 19:59:51.184099   27145 command_runner.go:130] > # allowed_devices = [
	I0626 19:59:51.184105   27145 command_runner.go:130] > # 	"/dev/fuse",
	I0626 19:59:51.184110   27145 command_runner.go:130] > # ]
	I0626 19:59:51.184116   27145 command_runner.go:130] > # List of additional devices, specified as
	I0626 19:59:51.184123   27145 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0626 19:59:51.184130   27145 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0626 19:59:51.184145   27145 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 19:59:51.184154   27145 command_runner.go:130] > # additional_devices = [
	I0626 19:59:51.184159   27145 command_runner.go:130] > # ]
	I0626 19:59:51.184170   27145 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0626 19:59:51.184177   27145 command_runner.go:130] > # cdi_spec_dirs = [
	I0626 19:59:51.184187   27145 command_runner.go:130] > # 	"/etc/cdi",
	I0626 19:59:51.184192   27145 command_runner.go:130] > # 	"/var/run/cdi",
	I0626 19:59:51.184197   27145 command_runner.go:130] > # ]
	I0626 19:59:51.184209   27145 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0626 19:59:51.184222   27145 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0626 19:59:51.184231   27145 command_runner.go:130] > # Defaults to false.
	I0626 19:59:51.184239   27145 command_runner.go:130] > # device_ownership_from_security_context = false
	I0626 19:59:51.184252   27145 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0626 19:59:51.184261   27145 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0626 19:59:51.184269   27145 command_runner.go:130] > # hooks_dir = [
	I0626 19:59:51.184276   27145 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0626 19:59:51.184279   27145 command_runner.go:130] > # ]
	I0626 19:59:51.184285   27145 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0626 19:59:51.184293   27145 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0626 19:59:51.184298   27145 command_runner.go:130] > # its default mounts from the following two files:
	I0626 19:59:51.184304   27145 command_runner.go:130] > #
	I0626 19:59:51.184310   27145 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0626 19:59:51.184319   27145 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0626 19:59:51.184324   27145 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0626 19:59:51.184330   27145 command_runner.go:130] > #
	I0626 19:59:51.184336   27145 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0626 19:59:51.184344   27145 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0626 19:59:51.184350   27145 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0626 19:59:51.184357   27145 command_runner.go:130] > #      only add mounts it finds in this file.
	I0626 19:59:51.184361   27145 command_runner.go:130] > #
	I0626 19:59:51.184365   27145 command_runner.go:130] > # default_mounts_file = ""
	I0626 19:59:51.184373   27145 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0626 19:59:51.184381   27145 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0626 19:59:51.184387   27145 command_runner.go:130] > pids_limit = 1024
	I0626 19:59:51.184393   27145 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0626 19:59:51.184401   27145 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0626 19:59:51.184407   27145 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0626 19:59:51.184417   27145 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0626 19:59:51.184421   27145 command_runner.go:130] > # log_size_max = -1
	I0626 19:59:51.184429   27145 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0626 19:59:51.184435   27145 command_runner.go:130] > # log_to_journald = false
	I0626 19:59:51.184441   27145 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0626 19:59:51.184446   27145 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0626 19:59:51.184453   27145 command_runner.go:130] > # Path to directory for container attach sockets.
	I0626 19:59:51.184458   27145 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0626 19:59:51.184465   27145 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0626 19:59:51.184469   27145 command_runner.go:130] > # bind_mount_prefix = ""
	I0626 19:59:51.184476   27145 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0626 19:59:51.184482   27145 command_runner.go:130] > # read_only = false
	I0626 19:59:51.184488   27145 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0626 19:59:51.184497   27145 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0626 19:59:51.184501   27145 command_runner.go:130] > # live configuration reload.
	I0626 19:59:51.184506   27145 command_runner.go:130] > # log_level = "info"
	I0626 19:59:51.184512   27145 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0626 19:59:51.184519   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 19:59:51.184523   27145 command_runner.go:130] > # log_filter = ""
	I0626 19:59:51.184529   27145 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0626 19:59:51.184536   27145 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0626 19:59:51.184540   27145 command_runner.go:130] > # separated by comma.
	I0626 19:59:51.184547   27145 command_runner.go:130] > # uid_mappings = ""
	I0626 19:59:51.184553   27145 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0626 19:59:51.184561   27145 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0626 19:59:51.184567   27145 command_runner.go:130] > # separated by comma.
	I0626 19:59:51.184571   27145 command_runner.go:130] > # gid_mappings = ""
	I0626 19:59:51.184579   27145 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0626 19:59:51.184586   27145 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 19:59:51.184605   27145 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 19:59:51.184614   27145 command_runner.go:130] > # minimum_mappable_uid = -1
	I0626 19:59:51.184623   27145 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0626 19:59:51.184631   27145 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 19:59:51.184637   27145 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 19:59:51.184644   27145 command_runner.go:130] > # minimum_mappable_gid = -1
	I0626 19:59:51.184650   27145 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0626 19:59:51.184659   27145 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0626 19:59:51.184665   27145 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0626 19:59:51.184671   27145 command_runner.go:130] > # ctr_stop_timeout = 30
	I0626 19:59:51.184676   27145 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0626 19:59:51.184684   27145 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0626 19:59:51.184689   27145 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0626 19:59:51.184697   27145 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0626 19:59:51.184704   27145 command_runner.go:130] > drop_infra_ctr = false
	I0626 19:59:51.184712   27145 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0626 19:59:51.184718   27145 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0626 19:59:51.184725   27145 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0626 19:59:51.184730   27145 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0626 19:59:51.184736   27145 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0626 19:59:51.184743   27145 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0626 19:59:51.184748   27145 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0626 19:59:51.184757   27145 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0626 19:59:51.184761   27145 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0626 19:59:51.184767   27145 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0626 19:59:51.184774   27145 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0626 19:59:51.184780   27145 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0626 19:59:51.184786   27145 command_runner.go:130] > # default_runtime = "runc"
	I0626 19:59:51.184791   27145 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0626 19:59:51.184800   27145 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0626 19:59:51.184809   27145 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0626 19:59:51.184816   27145 command_runner.go:130] > # creation as a file is not desired either.
	I0626 19:59:51.184824   27145 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0626 19:59:51.184831   27145 command_runner.go:130] > # the hostname is being managed dynamically.
	I0626 19:59:51.184836   27145 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0626 19:59:51.184839   27145 command_runner.go:130] > # ]
	I0626 19:59:51.184845   27145 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0626 19:59:51.184854   27145 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0626 19:59:51.184862   27145 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0626 19:59:51.184868   27145 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0626 19:59:51.184871   27145 command_runner.go:130] > #
	I0626 19:59:51.184876   27145 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0626 19:59:51.184881   27145 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0626 19:59:51.184886   27145 command_runner.go:130] > #  runtime_type = "oci"
	I0626 19:59:51.184890   27145 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0626 19:59:51.184897   27145 command_runner.go:130] > #  privileged_without_host_devices = false
	I0626 19:59:51.184902   27145 command_runner.go:130] > #  allowed_annotations = []
	I0626 19:59:51.184907   27145 command_runner.go:130] > # Where:
	I0626 19:59:51.184912   27145 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0626 19:59:51.184920   27145 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0626 19:59:51.184926   27145 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0626 19:59:51.184934   27145 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0626 19:59:51.184938   27145 command_runner.go:130] > #   in $PATH.
	I0626 19:59:51.184944   27145 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0626 19:59:51.184951   27145 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0626 19:59:51.184957   27145 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0626 19:59:51.184963   27145 command_runner.go:130] > #   state.
	I0626 19:59:51.184973   27145 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0626 19:59:51.184981   27145 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0626 19:59:51.184987   27145 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0626 19:59:51.184996   27145 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0626 19:59:51.185002   27145 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0626 19:59:51.185011   27145 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0626 19:59:51.185015   27145 command_runner.go:130] > #   The currently recognized values are:
	I0626 19:59:51.185024   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0626 19:59:51.185030   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0626 19:59:51.185040   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0626 19:59:51.185048   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0626 19:59:51.185055   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0626 19:59:51.185063   27145 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0626 19:59:51.185069   27145 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0626 19:59:51.185078   27145 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0626 19:59:51.185083   27145 command_runner.go:130] > #   should be moved to the container's cgroup
	I0626 19:59:51.185089   27145 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0626 19:59:51.185094   27145 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0626 19:59:51.185100   27145 command_runner.go:130] > runtime_type = "oci"
	I0626 19:59:51.185103   27145 command_runner.go:130] > runtime_root = "/run/runc"
	I0626 19:59:51.185107   27145 command_runner.go:130] > runtime_config_path = ""
	I0626 19:59:51.185111   27145 command_runner.go:130] > monitor_path = ""
	I0626 19:59:51.185117   27145 command_runner.go:130] > monitor_cgroup = ""
	I0626 19:59:51.185121   27145 command_runner.go:130] > monitor_exec_cgroup = ""
	I0626 19:59:51.185128   27145 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0626 19:59:51.185133   27145 command_runner.go:130] > # running containers
	I0626 19:59:51.185137   27145 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0626 19:59:51.185144   27145 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0626 19:59:51.185193   27145 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0626 19:59:51.185207   27145 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0626 19:59:51.185215   27145 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0626 19:59:51.185221   27145 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0626 19:59:51.185231   27145 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0626 19:59:51.185238   27145 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0626 19:59:51.185248   27145 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0626 19:59:51.185258   27145 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0626 19:59:51.185266   27145 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0626 19:59:51.185275   27145 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0626 19:59:51.185281   27145 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0626 19:59:51.185288   27145 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0626 19:59:51.185298   27145 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0626 19:59:51.185303   27145 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0626 19:59:51.185314   27145 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0626 19:59:51.185324   27145 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0626 19:59:51.185332   27145 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0626 19:59:51.185341   27145 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0626 19:59:51.185345   27145 command_runner.go:130] > # Example:
	I0626 19:59:51.185350   27145 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0626 19:59:51.185356   27145 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0626 19:59:51.185362   27145 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0626 19:59:51.185369   27145 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0626 19:59:51.185384   27145 command_runner.go:130] > # cpuset = 0
	I0626 19:59:51.185394   27145 command_runner.go:130] > # cpushares = "0-1"
	I0626 19:59:51.185401   27145 command_runner.go:130] > # Where:
	I0626 19:59:51.185411   27145 command_runner.go:130] > # The workload name is workload-type.
	I0626 19:59:51.185421   27145 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0626 19:59:51.185433   27145 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0626 19:59:51.185441   27145 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0626 19:59:51.185456   27145 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0626 19:59:51.185468   27145 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0626 19:59:51.185473   27145 command_runner.go:130] > # 
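For reference, a pod could opt into the commented workload example above by carrying the activation annotation plus a per-container override. A minimal sketch using the annotation names from the sample config (nothing here is configured in this cluster, since the workloads table above is left commented out):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo            # hypothetical name, for illustration only
      annotations:
        io.crio/workload: ""         # activation annotation; key only, value ignored
        io.crio.workload-type/workload-demo: '{"cpushares": "512"}'   # per-container override
    spec:
      containers:
      - name: workload-demo
        image: registry.k8s.io/pause:3.9
    EOF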
	I0626 19:59:51.185484   27145 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0626 19:59:51.185491   27145 command_runner.go:130] > #
	I0626 19:59:51.185499   27145 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0626 19:59:51.185508   27145 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0626 19:59:51.185514   27145 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0626 19:59:51.185522   27145 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0626 19:59:51.185528   27145 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0626 19:59:51.185534   27145 command_runner.go:130] > [crio.image]
	I0626 19:59:51.185540   27145 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0626 19:59:51.185546   27145 command_runner.go:130] > # default_transport = "docker://"
	I0626 19:59:51.185553   27145 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0626 19:59:51.185576   27145 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0626 19:59:51.185586   27145 command_runner.go:130] > # global_auth_file = ""
	I0626 19:59:51.185591   27145 command_runner.go:130] > # The image used to instantiate infra containers.
	I0626 19:59:51.185598   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 19:59:51.185604   27145 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0626 19:59:51.185613   27145 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0626 19:59:51.185621   27145 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0626 19:59:51.185626   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 19:59:51.185633   27145 command_runner.go:130] > # pause_image_auth_file = ""
	I0626 19:59:51.185639   27145 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0626 19:59:51.185647   27145 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0626 19:59:51.185655   27145 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0626 19:59:51.185661   27145 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0626 19:59:51.185665   27145 command_runner.go:130] > # pause_command = "/pause"
	I0626 19:59:51.185671   27145 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0626 19:59:51.185677   27145 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0626 19:59:51.185683   27145 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0626 19:59:51.185688   27145 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0626 19:59:51.185693   27145 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0626 19:59:51.185697   27145 command_runner.go:130] > # signature_policy = ""
	I0626 19:59:51.185703   27145 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0626 19:59:51.185708   27145 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0626 19:59:51.185712   27145 command_runner.go:130] > # changing them here.
	I0626 19:59:51.185716   27145 command_runner.go:130] > # insecure_registries = [
	I0626 19:59:51.185720   27145 command_runner.go:130] > # ]
	I0626 19:59:51.185734   27145 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0626 19:59:51.185741   27145 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0626 19:59:51.185746   27145 command_runner.go:130] > # image_volumes = "mkdir"
	I0626 19:59:51.185752   27145 command_runner.go:130] > # Temporary directory to use for storing big files
	I0626 19:59:51.185758   27145 command_runner.go:130] > # big_files_temporary_dir = ""
	I0626 19:59:51.185767   27145 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0626 19:59:51.185773   27145 command_runner.go:130] > # CNI plugins.
	I0626 19:59:51.185777   27145 command_runner.go:130] > [crio.network]
	I0626 19:59:51.185782   27145 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0626 19:59:51.185787   27145 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0626 19:59:51.185796   27145 command_runner.go:130] > # cni_default_network = ""
	I0626 19:59:51.185801   27145 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0626 19:59:51.185806   27145 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0626 19:59:51.185811   27145 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0626 19:59:51.185814   27145 command_runner.go:130] > # plugin_dirs = [
	I0626 19:59:51.185818   27145 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0626 19:59:51.185821   27145 command_runner.go:130] > # ]
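A quick way to sanity-check those CNI paths on the node itself (a sketch; assumes the default network_dir and plugin_dirs shown above are in effect):

    minikube -p multinode-050558 ssh -- "ls /etc/cni/net.d/ && ls /opt/cni/bin/"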
	I0626 19:59:51.185826   27145 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0626 19:59:51.185830   27145 command_runner.go:130] > [crio.metrics]
	I0626 19:59:51.185834   27145 command_runner.go:130] > # Globally enable or disable metrics support.
	I0626 19:59:51.185838   27145 command_runner.go:130] > enable_metrics = true
	I0626 19:59:51.185842   27145 command_runner.go:130] > # Specify enabled metrics collectors.
	I0626 19:59:51.185846   27145 command_runner.go:130] > # Per default all metrics are enabled.
	I0626 19:59:51.185851   27145 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0626 19:59:51.185859   27145 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0626 19:59:51.185864   27145 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0626 19:59:51.185868   27145 command_runner.go:130] > # metrics_collectors = [
	I0626 19:59:51.185872   27145 command_runner.go:130] > # 	"operations",
	I0626 19:59:51.185879   27145 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0626 19:59:51.185886   27145 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0626 19:59:51.185890   27145 command_runner.go:130] > # 	"operations_errors",
	I0626 19:59:51.185894   27145 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0626 19:59:51.185900   27145 command_runner.go:130] > # 	"image_pulls_by_name",
	I0626 19:59:51.185904   27145 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0626 19:59:51.185908   27145 command_runner.go:130] > # 	"image_pulls_failures",
	I0626 19:59:51.185912   27145 command_runner.go:130] > # 	"image_pulls_successes",
	I0626 19:59:51.185918   27145 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0626 19:59:51.185922   27145 command_runner.go:130] > # 	"image_layer_reuse",
	I0626 19:59:51.185928   27145 command_runner.go:130] > # 	"containers_oom_total",
	I0626 19:59:51.185931   27145 command_runner.go:130] > # 	"containers_oom",
	I0626 19:59:51.185935   27145 command_runner.go:130] > # 	"processes_defunct",
	I0626 19:59:51.185939   27145 command_runner.go:130] > # 	"operations_total",
	I0626 19:59:51.185946   27145 command_runner.go:130] > # 	"operations_latency_seconds",
	I0626 19:59:51.185950   27145 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0626 19:59:51.185956   27145 command_runner.go:130] > # 	"operations_errors_total",
	I0626 19:59:51.185960   27145 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0626 19:59:51.185976   27145 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0626 19:59:51.185982   27145 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0626 19:59:51.185987   27145 command_runner.go:130] > # 	"image_pulls_success_total",
	I0626 19:59:51.185991   27145 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0626 19:59:51.185997   27145 command_runner.go:130] > # 	"containers_oom_count_total",
	I0626 19:59:51.186000   27145 command_runner.go:130] > # ]
	I0626 19:59:51.186008   27145 command_runner.go:130] > # The port on which the metrics server will listen.
	I0626 19:59:51.186011   27145 command_runner.go:130] > # metrics_port = 9090
	I0626 19:59:51.186018   27145 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0626 19:59:51.186022   27145 command_runner.go:130] > # metrics_socket = ""
	I0626 19:59:51.186029   27145 command_runner.go:130] > # The certificate for the secure metrics server.
	I0626 19:59:51.186035   27145 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0626 19:59:51.186041   27145 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0626 19:59:51.186048   27145 command_runner.go:130] > # certificate on any modification event.
	I0626 19:59:51.186051   27145 command_runner.go:130] > # metrics_cert = ""
	I0626 19:59:51.186058   27145 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0626 19:59:51.186063   27145 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0626 19:59:51.186067   27145 command_runner.go:130] > # metrics_key = ""
	I0626 19:59:51.186073   27145 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0626 19:59:51.186080   27145 command_runner.go:130] > [crio.tracing]
	I0626 19:59:51.186085   27145 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0626 19:59:51.186090   27145 command_runner.go:130] > # enable_tracing = false
	I0626 19:59:51.186095   27145 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0626 19:59:51.186103   27145 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0626 19:59:51.186108   27145 command_runner.go:130] > # Number of samples to collect per million spans.
	I0626 19:59:51.186116   27145 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0626 19:59:51.186121   27145 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0626 19:59:51.186127   27145 command_runner.go:130] > [crio.stats]
	I0626 19:59:51.186133   27145 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0626 19:59:51.186138   27145 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0626 19:59:51.186145   27145 command_runner.go:130] > # stats_collection_period = 0
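That ends the dump of CRI-O's effective TOML configuration. To re-render it on the node directly, the crio CLI can print its merged configuration (a sketch, assuming this build's crio binary provides the config subcommand):

    minikube -p multinode-050558 ssh -- "sudo crio config | grep -E '^(pause_image|enable_metrics)'"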
	I0626 19:59:51.186233   27145 cni.go:84] Creating CNI manager for ""
	I0626 19:59:51.186250   27145 cni.go:137] 1 nodes found, recommending kindnet
	I0626 19:59:51.186260   27145 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 19:59:51.186285   27145 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-050558 NodeName:multinode-050558 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 19:59:51.186419   27145 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-050558"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
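The three documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new below. For comparison against stock defaults, kubeadm itself can print them (a generic kubeadm invocation, not something this test runs):

    kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration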
	
	I0626 19:59:51.186483   27145 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-050558 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
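The [Service] drop-in above is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the next few steps; the merged unit can be inspected on the node with systemd's own tooling (a sketch):

    minikube -p multinode-050558 ssh -- "systemctl cat kubelet"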
	I0626 19:59:51.186529   27145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 19:59:51.196238   27145 command_runner.go:130] > kubeadm
	I0626 19:59:51.196257   27145 command_runner.go:130] > kubectl
	I0626 19:59:51.196263   27145 command_runner.go:130] > kubelet
	I0626 19:59:51.196286   27145 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 19:59:51.196339   27145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 19:59:51.205453   27145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0626 19:59:51.220714   27145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 19:59:51.236036   27145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0626 19:59:51.251222   27145 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0626 19:59:51.254619   27145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
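That one-liner is an idempotent rewrite of /etc/hosts: it filters out any existing control-plane.minikube.internal entry, appends a fresh one for 192.168.39.229, and copies the temp file back over /etc/hosts. The expected end state is simply:

    $ grep control-plane.minikube.internal /etc/hosts
    192.168.39.229	control-plane.minikube.internal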
	I0626 19:59:51.265559   27145 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558 for IP: 192.168.39.229
	I0626 19:59:51.265588   27145 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:59:51.265751   27145 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 19:59:51.265796   27145 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 19:59:51.265839   27145 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key
	I0626 19:59:51.265852   27145 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt with IP's: []
	I0626 19:59:51.451286   27145 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt ...
	I0626 19:59:51.451320   27145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt: {Name:mk4ac8e5f9b1f324d860041ddfd3475def022370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:59:51.451478   27145 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key ...
	I0626 19:59:51.451487   27145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key: {Name:mk81f782487fe0e6272a3a291f9c72d48acb14f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:59:51.451560   27145 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key.24f4b2b2
	I0626 19:59:51.451573   27145 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.crt.24f4b2b2 with IP's: [192.168.39.229 10.96.0.1 127.0.0.1 10.0.0.1]
	I0626 19:59:51.501075   27145 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.crt.24f4b2b2 ...
	I0626 19:59:51.501102   27145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.crt.24f4b2b2: {Name:mkef8f7e266effdcc61999d02fa69b78fd3c47e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:59:51.501241   27145 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key.24f4b2b2 ...
	I0626 19:59:51.501251   27145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key.24f4b2b2: {Name:mk914e425d616ce741819bc854e7ff871db55aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:59:51.501325   27145 certs.go:337] copying /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.crt.24f4b2b2 -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.crt
	I0626 19:59:51.501408   27145 certs.go:341] copying /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key.24f4b2b2 -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key
	I0626 19:59:51.501456   27145 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.key
	I0626 19:59:51.501472   27145 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.crt with IP's: []
	I0626 19:59:51.665978   27145 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.crt ...
	I0626 19:59:51.666007   27145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.crt: {Name:mk5c39c21c2a5b85a81d862724160739db9e432e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:59:51.666203   27145 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.key ...
	I0626 19:59:51.666217   27145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.key: {Name:mk2a82f761bdb59e68750e7a699b9f7a86b187d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 19:59:51.666306   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0626 19:59:51.666333   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0626 19:59:51.666353   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0626 19:59:51.666371   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0626 19:59:51.666388   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0626 19:59:51.666424   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0626 19:59:51.666436   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0626 19:59:51.666452   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0626 19:59:51.666515   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 19:59:51.666555   27145 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 19:59:51.666571   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 19:59:51.666608   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 19:59:51.666641   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 19:59:51.666670   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 19:59:51.666723   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 19:59:51.666759   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem -> /usr/share/ca-certificates/14443.pem
	I0626 19:59:51.666779   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /usr/share/ca-certificates/144432.pem
	I0626 19:59:51.666795   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:59:51.667312   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 19:59:51.695214   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0626 19:59:51.719326   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 19:59:51.742363   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 19:59:51.764508   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 19:59:51.786430   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 19:59:51.809143   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 19:59:51.831651   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 19:59:51.856004   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 19:59:51.879897   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 19:59:51.903479   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
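With the certificates copied under /var/lib/minikube/certs, their SANs can be cross-checked against the IPs requested above (192.168.39.229, 10.96.0.1, 127.0.0.1, 10.0.0.1); a sketch using the -ext option from OpenSSL 1.1.1 (the node reports 1.1.1n just below):

    minikube -p multinode-050558 ssh -- "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -ext subjectAltName"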
	I0626 19:59:51.925585   27145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 19:59:51.942048   27145 ssh_runner.go:195] Run: openssl version
	I0626 19:59:51.947316   27145 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0626 19:59:51.947595   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 19:59:51.958287   27145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 19:59:51.962794   27145 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 19:59:51.962819   27145 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 19:59:51.962854   27145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 19:59:51.967954   27145 command_runner.go:130] > 51391683
	I0626 19:59:51.968276   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 19:59:51.978602   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 19:59:51.989337   27145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 19:59:51.993862   27145 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 19:59:51.993887   27145 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 19:59:51.993938   27145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 19:59:51.999487   27145 command_runner.go:130] > 3ec20f2e
	I0626 19:59:51.999549   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 19:59:52.010532   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 19:59:52.021621   27145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:59:52.026252   27145 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:59:52.026282   27145 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:59:52.026328   27145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 19:59:52.031639   27145 command_runner.go:130] > b5213941
	I0626 19:59:52.031951   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
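The pattern in the last few steps is OpenSSL's hashed-directory lookup: openssl x509 -hash prints each CA's subject-name hash (51391683, 3ec20f2e, b5213941 above), and a <hash>.0 symlink under /etc/ssl/certs lets TLS clients locate the CA by that hash. On OpenSSL 1.1.x the same symlinks can be (re)built in one shot:

    sudo openssl rehash /etc/ssl/certs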
	I0626 19:59:52.042739   27145 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 19:59:52.046825   27145 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 19:59:52.046901   27145 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 19:59:52.046962   27145 kubeadm.go:404] StartCluster: {Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:59:52.047061   27145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 19:59:52.047126   27145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 19:59:52.081382   27145 cri.go:89] found id: ""
	I0626 19:59:52.081454   27145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 19:59:52.091421   27145 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0626 19:59:52.091449   27145 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0626 19:59:52.091460   27145 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0626 19:59:52.091536   27145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 19:59:52.101191   27145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 19:59:52.110708   27145 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0626 19:59:52.110732   27145 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0626 19:59:52.110743   27145 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0626 19:59:52.110755   27145 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 19:59:52.110785   27145 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 19:59:52.110826   27145 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 19:59:52.224947   27145 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 19:59:52.224979   27145 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0626 19:59:52.225037   27145 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 19:59:52.225053   27145 command_runner.go:130] > [preflight] Running pre-flight checks
	I0626 19:59:52.432527   27145 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 19:59:52.432558   27145 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 19:59:52.432707   27145 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 19:59:52.432727   27145 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 19:59:52.432830   27145 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 19:59:52.432854   27145 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 19:59:52.613996   27145 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 19:59:52.614009   27145 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 19:59:52.955288   27145 out.go:204]   - Generating certificates and keys ...
	I0626 19:59:52.955369   27145 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0626 19:59:52.955379   27145 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 19:59:52.955425   27145 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0626 19:59:52.955488   27145 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 19:59:52.955568   27145 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0626 19:59:52.955576   27145 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0626 19:59:53.222538   27145 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0626 19:59:53.222563   27145 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0626 19:59:53.431460   27145 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0626 19:59:53.431490   27145 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0626 19:59:53.682697   27145 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0626 19:59:53.682721   27145 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0626 19:59:54.017831   27145 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0626 19:59:54.017851   27145 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0626 19:59:54.017997   27145 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-050558] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0626 19:59:54.018029   27145 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-050558] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0626 19:59:54.210163   27145 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0626 19:59:54.210196   27145 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0626 19:59:54.210508   27145 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-050558] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0626 19:59:54.210520   27145 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-050558] and IPs [192.168.39.229 127.0.0.1 ::1]
	I0626 19:59:54.380515   27145 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0626 19:59:54.380538   27145 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0626 19:59:54.704063   27145 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0626 19:59:54.704106   27145 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0626 19:59:54.781792   27145 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0626 19:59:54.781819   27145 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0626 19:59:54.781898   27145 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 19:59:54.781912   27145 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 19:59:54.899079   27145 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 19:59:54.899104   27145 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 19:59:54.983506   27145 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 19:59:54.983531   27145 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 19:59:55.075963   27145 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 19:59:55.075993   27145 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 19:59:55.176209   27145 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 19:59:55.176237   27145 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 19:59:55.191855   27145 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 19:59:55.191877   27145 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 19:59:55.194473   27145 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 19:59:55.194484   27145 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 19:59:55.194541   27145 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 19:59:55.194555   27145 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0626 19:59:55.317850   27145 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 19:59:55.319969   27145 out.go:204]   - Booting up control plane ...
	I0626 19:59:55.317893   27145 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 19:59:55.320078   27145 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 19:59:55.320091   27145 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 19:59:55.320201   27145 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 19:59:55.320213   27145 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 19:59:55.320285   27145 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 19:59:55.320294   27145 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 19:59:55.320396   27145 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 19:59:55.320414   27145 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 19:59:55.323885   27145 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 19:59:55.323900   27145 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:00:03.325085   27145 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004162 seconds
	I0626 20:00:03.325109   27145 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.004162 seconds
	I0626 20:00:03.325240   27145 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:00:03.325249   27145 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:00:03.350735   27145 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:00:03.350787   27145 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:00:03.893477   27145 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:00:03.893510   27145 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:00:03.893680   27145 kubeadm.go:322] [mark-control-plane] Marking the node multinode-050558 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:00:03.893691   27145 command_runner.go:130] > [mark-control-plane] Marking the node multinode-050558 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:00:04.407982   27145 kubeadm.go:322] [bootstrap-token] Using token: jwhfyy.zynq3omhqe9iz1ek
	I0626 20:00:04.409646   27145 out.go:204]   - Configuring RBAC rules ...
	I0626 20:00:04.408062   27145 command_runner.go:130] > [bootstrap-token] Using token: jwhfyy.zynq3omhqe9iz1ek
	I0626 20:00:04.409778   27145 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:00:04.409803   27145 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:00:04.418631   27145 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:00:04.418664   27145 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:00:04.426783   27145 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:00:04.426801   27145 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:00:04.433624   27145 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:00:04.433642   27145 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:00:04.438334   27145 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:00:04.438350   27145 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:00:04.441708   27145 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:00:04.441728   27145 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:00:04.462438   27145 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:00:04.462468   27145 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:00:04.741866   27145 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:00:04.741895   27145 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0626 20:00:04.889545   27145 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:00:04.889588   27145 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0626 20:00:04.890611   27145 kubeadm.go:322] 
	I0626 20:00:04.890689   27145 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:00:04.890705   27145 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0626 20:00:04.890711   27145 kubeadm.go:322] 
	I0626 20:00:04.890801   27145 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:00:04.890814   27145 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0626 20:00:04.890823   27145 kubeadm.go:322] 
	I0626 20:00:04.890852   27145 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:00:04.890863   27145 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0626 20:00:04.890940   27145 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:00:04.890957   27145 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:00:04.891000   27145 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:00:04.891006   27145 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:00:04.891012   27145 kubeadm.go:322] 
	I0626 20:00:04.891097   27145 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:00:04.891113   27145 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0626 20:00:04.891118   27145 kubeadm.go:322] 
	I0626 20:00:04.891178   27145 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:00:04.891194   27145 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:00:04.891197   27145 kubeadm.go:322] 
	I0626 20:00:04.891239   27145 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:00:04.891244   27145 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0626 20:00:04.891303   27145 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:00:04.891309   27145 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:00:04.891363   27145 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:00:04.891378   27145 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:00:04.891392   27145 kubeadm.go:322] 
	I0626 20:00:04.891492   27145 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:00:04.891506   27145 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:00:04.891606   27145 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:00:04.891618   27145 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0626 20:00:04.891624   27145 kubeadm.go:322] 
	I0626 20:00:04.891735   27145 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jwhfyy.zynq3omhqe9iz1ek \
	I0626 20:00:04.891747   27145 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token jwhfyy.zynq3omhqe9iz1ek \
	I0626 20:00:04.891957   27145 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:00:04.891970   27145 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:00:04.891999   27145 kubeadm.go:322] 	--control-plane 
	I0626 20:00:04.892006   27145 command_runner.go:130] > 	--control-plane 
	I0626 20:00:04.892016   27145 kubeadm.go:322] 
	I0626 20:00:04.892122   27145 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:00:04.892133   27145 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:00:04.892138   27145 kubeadm.go:322] 
	I0626 20:00:04.892228   27145 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jwhfyy.zynq3omhqe9iz1ek \
	I0626 20:00:04.892239   27145 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token jwhfyy.zynq3omhqe9iz1ek \
	I0626 20:00:04.892321   27145 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:00:04.892327   27145 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:00:04.892747   27145 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:00:04.892769   27145 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
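The join token above (jwhfyy.zynq3omhqe9iz1ek) expires after the 24h ttl set in the InitConfiguration; a fresh worker join command can always be regenerated on the control plane with stock kubeadm:

    kubeadm token create --print-join-command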
	I0626 20:00:04.892793   27145 cni.go:84] Creating CNI manager for ""
	I0626 20:00:04.892822   27145 cni.go:137] 1 nodes found, recommending kindnet
	I0626 20:00:04.894896   27145 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0626 20:00:04.896407   27145 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0626 20:00:04.903525   27145 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0626 20:00:04.903541   27145 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0626 20:00:04.903550   27145 command_runner.go:130] > Device: 11h/17d	Inode: 3543        Links: 1
	I0626 20:00:04.903559   27145 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 20:00:04.903567   27145 command_runner.go:130] > Access: 2023-06-26 19:59:35.511112889 +0000
	I0626 20:00:04.903574   27145 command_runner.go:130] > Modify: 2023-06-22 22:21:30.000000000 +0000
	I0626 20:00:04.903586   27145 command_runner.go:130] > Change: 2023-06-26 19:59:33.743112889 +0000
	I0626 20:00:04.903592   27145 command_runner.go:130] >  Birth: -
	I0626 20:00:04.903632   27145 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0626 20:00:04.903643   27145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0626 20:00:04.931307   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0626 20:00:05.906149   27145 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0626 20:00:05.913281   27145 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0626 20:00:05.922333   27145 command_runner.go:130] > serviceaccount/kindnet created
	I0626 20:00:05.936916   27145 command_runner.go:130] > daemonset.apps/kindnet created
	I0626 20:00:05.939407   27145 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.008064284s)
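
The apply above installs the kindnet CNI objects (clusterrole, clusterrolebinding, serviceaccount, daemonset). A quick way to confirm the daemonset rolled out, as a sketch assuming kindnet lands in kube-system as is typical:

    # check the kindnet daemonset created by the CNI manifest (namespace assumed)
    kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet
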
	I0626 20:00:05.939457   27145 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:00:05.939572   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=multinode-050558 minikube.k8s.io/updated_at=2023_06_26T20_00_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:05.939573   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:05.951225   27145 command_runner.go:130] > -16
	I0626 20:00:05.951397   27145 ops.go:34] apiserver oom_adj: -16
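
The oom_adj probe above reads the kernel's OOM-kill adjustment for the kube-apiserver process; -16 means the API server is strongly shielded from the OOM killer. The same check by hand, exactly as the log runs it:

    # inspect the OOM adjustment of the running API server process
    cat /proc/$(pgrep kube-apiserver)/oom_adj
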
	I0626 20:00:06.164608   27145 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0626 20:00:06.166269   27145 command_runner.go:130] > node/multinode-050558 labeled
	I0626 20:00:06.166291   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:06.250979   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:06.751813   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:06.836586   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:07.251162   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:07.347566   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:07.751863   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:07.833424   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:08.251433   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:08.334104   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:08.751769   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:08.836120   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:09.251776   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:09.340417   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:09.752002   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:09.832745   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:10.251909   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:10.337362   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:10.751933   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:10.836997   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:11.251507   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:11.331868   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:11.751442   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:11.835901   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:12.251470   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:12.337458   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:12.752142   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:12.833760   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:13.251379   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:13.335169   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:13.751426   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:13.831401   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:14.251937   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:14.339642   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:14.751878   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:14.839400   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:15.251469   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:15.391253   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:15.751813   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:15.841213   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:16.252047   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:16.353314   27145 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 20:00:16.751181   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:00:16.918304   27145 command_runner.go:130] > NAME      SECRETS   AGE
	I0626 20:00:16.918328   27145 command_runner.go:130] > default   0         0s
	I0626 20:00:16.919666   27145 kubeadm.go:1081] duration metric: took 10.980147407s to wait for elevateKubeSystemPrivileges.
	I0626 20:00:16.919689   27145 kubeadm.go:406] StartCluster complete in 24.872731341s
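
The repeated "serviceaccounts \"default\" not found" errors above are expected: the loop polls until the controller manager has created the default ServiceAccount, which is the readiness signal elevateKubeSystemPrivileges waits on. The probe it retries, runnable by hand on the node:

    # succeeds once the default ServiceAccount exists
    sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
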
	I0626 20:00:16.919707   27145 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:00:16.919769   27145 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:00:16.920368   27145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:00:16.920579   27145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:00:16.920595   27145 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:00:16.920666   27145 addons.go:66] Setting default-storageclass=true in profile "multinode-050558"
	I0626 20:00:16.920668   27145 addons.go:66] Setting storage-provisioner=true in profile "multinode-050558"
	I0626 20:00:16.920685   27145 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-050558"
	I0626 20:00:16.920686   27145 addons.go:228] Setting addon storage-provisioner=true in "multinode-050558"
	I0626 20:00:16.920788   27145 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:00:16.920830   27145 host.go:66] Checking if "multinode-050558" exists ...
	I0626 20:00:16.920858   27145 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:00:16.921121   27145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:00:16.921153   27145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:00:16.921195   27145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:00:16.921251   27145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:00:16.921122   27145 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:00:16.922039   27145 cert_rotation.go:137] Starting client certificate rotation controller
	I0626 20:00:16.922266   27145 round_trippers.go:463] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 20:00:16.922279   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:16.922291   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:16.922305   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:16.941264   27145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I0626 20:00:16.941437   27145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41269
	I0626 20:00:16.941774   27145 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:00:16.941808   27145 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:00:16.942270   27145 main.go:141] libmachine: Using API Version  1
	I0626 20:00:16.942286   27145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:00:16.942269   27145 main.go:141] libmachine: Using API Version  1
	I0626 20:00:16.942340   27145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:00:16.942632   27145 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:00:16.942706   27145 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:00:16.942864   27145 main.go:141] libmachine: (multinode-050558) Calling .GetState
	I0626 20:00:16.943255   27145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:00:16.943283   27145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:00:16.945099   27145 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:00:16.945441   27145 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:00:16.945888   27145 round_trippers.go:463] GET https://192.168.39.229:8443/apis/storage.k8s.io/v1/storageclasses
	I0626 20:00:16.945908   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:16.945919   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:16.945934   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:16.956655   27145 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0626 20:00:16.956679   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:16.956689   27145 round_trippers.go:580]     Content-Length: 109
	I0626 20:00:16.956697   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:16 GMT
	I0626 20:00:16.956708   27145 round_trippers.go:580]     Audit-Id: 877425fa-342b-4cb3-85cc-34ef81bc8910
	I0626 20:00:16.956718   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:16.956730   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:16.956744   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:16.956758   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:16.956786   27145 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"341"},"items":[]}
	I0626 20:00:16.957108   27145 addons.go:228] Setting addon default-storageclass=true in "multinode-050558"
	I0626 20:00:16.957147   27145 host.go:66] Checking if "multinode-050558" exists ...
	I0626 20:00:16.957307   27145 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0626 20:00:16.957326   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:16.957345   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:16.957359   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:16.957394   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:16.957410   27145 round_trippers.go:580]     Content-Length: 291
	I0626 20:00:16.957419   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:16 GMT
	I0626 20:00:16.957432   27145 round_trippers.go:580]     Audit-Id: 3691d934-80a2-4706-8cc2-36e5e786380e
	I0626 20:00:16.957443   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:16.957463   27145 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94c202ca-4f15-4fc0-a8d2-e6d62293ec32","resourceVersion":"341","creationTimestamp":"2023-06-26T20:00:04Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0626 20:00:16.957528   27145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:00:16.957556   27145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:00:16.957787   27145 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94c202ca-4f15-4fc0-a8d2-e6d62293ec32","resourceVersion":"341","creationTimestamp":"2023-06-26T20:00:04Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0626 20:00:16.957830   27145 round_trippers.go:463] PUT https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 20:00:16.957835   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:16.957843   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:16.957849   27145 round_trippers.go:473]     Content-Type: application/json
	I0626 20:00:16.957856   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:16.958168   27145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45941
	I0626 20:00:16.958595   27145 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:00:16.959119   27145 main.go:141] libmachine: Using API Version  1
	I0626 20:00:16.959145   27145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:00:16.959556   27145 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:00:16.959759   27145 main.go:141] libmachine: (multinode-050558) Calling .GetState
	I0626 20:00:16.961433   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:00:16.963555   27145 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:00:16.965106   27145 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:00:16.965122   27145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:00:16.965135   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:00:16.968102   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:00:16.968564   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:00:16.968596   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:00:16.968750   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:00:16.968948   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:00:16.969128   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:00:16.969266   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:00:16.974696   27145 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0626 20:00:16.974715   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:16.974722   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:16.974727   27145 round_trippers.go:580]     Content-Length: 291
	I0626 20:00:16.974733   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:16 GMT
	I0626 20:00:16.974738   27145 round_trippers.go:580]     Audit-Id: 518b7cbd-0029-40d2-8826-4556237eca46
	I0626 20:00:16.974743   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:16.974751   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:16.974759   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:16.974785   27145 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94c202ca-4f15-4fc0-a8d2-e6d62293ec32","resourceVersion":"342","creationTimestamp":"2023-06-26T20:00:04Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0626 20:00:16.976105   27145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0626 20:00:16.976458   27145 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:00:16.977070   27145 main.go:141] libmachine: Using API Version  1
	I0626 20:00:16.977098   27145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:00:16.977505   27145 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:00:16.977963   27145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:00:16.977998   27145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:00:16.992244   27145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0626 20:00:16.992659   27145 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:00:16.993188   27145 main.go:141] libmachine: Using API Version  1
	I0626 20:00:16.993212   27145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:00:16.993544   27145 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:00:16.993734   27145 main.go:141] libmachine: (multinode-050558) Calling .GetState
	I0626 20:00:16.995280   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:00:16.995532   27145 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:00:16.995546   27145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:00:16.995563   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:00:16.998364   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:00:16.998753   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:00:16.998783   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:00:16.998939   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:00:16.999118   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:00:16.999272   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:00:16.999377   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:00:17.139448   27145 command_runner.go:130] > apiVersion: v1
	I0626 20:00:17.139473   27145 command_runner.go:130] > data:
	I0626 20:00:17.139481   27145 command_runner.go:130] >   Corefile: |
	I0626 20:00:17.139487   27145 command_runner.go:130] >     .:53 {
	I0626 20:00:17.139494   27145 command_runner.go:130] >         errors
	I0626 20:00:17.139503   27145 command_runner.go:130] >         health {
	I0626 20:00:17.139511   27145 command_runner.go:130] >            lameduck 5s
	I0626 20:00:17.139518   27145 command_runner.go:130] >         }
	I0626 20:00:17.139526   27145 command_runner.go:130] >         ready
	I0626 20:00:17.139536   27145 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0626 20:00:17.139543   27145 command_runner.go:130] >            pods insecure
	I0626 20:00:17.139559   27145 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0626 20:00:17.139567   27145 command_runner.go:130] >            ttl 30
	I0626 20:00:17.139574   27145 command_runner.go:130] >         }
	I0626 20:00:17.139584   27145 command_runner.go:130] >         prometheus :9153
	I0626 20:00:17.139592   27145 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0626 20:00:17.139600   27145 command_runner.go:130] >            max_concurrent 1000
	I0626 20:00:17.139610   27145 command_runner.go:130] >         }
	I0626 20:00:17.139618   27145 command_runner.go:130] >         cache 30
	I0626 20:00:17.139628   27145 command_runner.go:130] >         loop
	I0626 20:00:17.139636   27145 command_runner.go:130] >         reload
	I0626 20:00:17.139647   27145 command_runner.go:130] >         loadbalance
	I0626 20:00:17.139655   27145 command_runner.go:130] >     }
	I0626 20:00:17.139663   27145 command_runner.go:130] > kind: ConfigMap
	I0626 20:00:17.139674   27145 command_runner.go:130] > metadata:
	I0626 20:00:17.139684   27145 command_runner.go:130] >   creationTimestamp: "2023-06-26T20:00:04Z"
	I0626 20:00:17.139694   27145 command_runner.go:130] >   name: coredns
	I0626 20:00:17.139702   27145 command_runner.go:130] >   namespace: kube-system
	I0626 20:00:17.139714   27145 command_runner.go:130] >   resourceVersion: "234"
	I0626 20:00:17.139724   27145 command_runner.go:130] >   uid: d6a9305d-4072-4b2f-9835-f4e058f49445
	I0626 20:00:17.139854   27145 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
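
The pipeline above edits the coredns Corefile in place: the first sed expression inserts a hosts block ahead of the forward plugin so host.minikube.internal resolves to the host-side gateway 192.168.39.1, and the second inserts the log plugin ahead of errors. Reconstructed from the sed script, the rewritten fragment should read:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
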
	I0626 20:00:17.161424   27145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:00:17.271103   27145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:00:17.475862   27145 round_trippers.go:463] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 20:00:17.475885   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:17.475893   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:17.475899   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:17.523597   27145 round_trippers.go:574] Response Status: 200 OK in 47 milliseconds
	I0626 20:00:17.523624   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:17.523631   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:17 GMT
	I0626 20:00:17.523637   27145 round_trippers.go:580]     Audit-Id: b07cc747-8947-4063-8c56-753128544a7e
	I0626 20:00:17.523642   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:17.523648   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:17.523653   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:17.523658   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:17.523663   27145 round_trippers.go:580]     Content-Length: 291
	I0626 20:00:17.523681   27145 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94c202ca-4f15-4fc0-a8d2-e6d62293ec32","resourceVersion":"362","creationTimestamp":"2023-06-26T20:00:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0626 20:00:17.523769   27145 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-050558" context rescaled to 1 replicas
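
The Scale PUT above drops the coredns deployment from its default two replicas to the single replica a one-node cluster needs. The hand-run equivalent of that API call:

    # scale the coredns deployment down to one replica
    kubectl -n kube-system scale deployment coredns --replicas=1
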
	I0626 20:00:17.523807   27145 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:00:17.525859   27145 out.go:177] * Verifying Kubernetes components...
	I0626 20:00:17.527442   27145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:00:18.002141   27145 command_runner.go:130] > configmap/coredns replaced
	I0626 20:00:18.004536   27145 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0626 20:00:18.305193   27145 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0626 20:00:18.305213   27145 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0626 20:00:18.305232   27145 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0626 20:00:18.305239   27145 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0626 20:00:18.305244   27145 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0626 20:00:18.305250   27145 command_runner.go:130] > pod/storage-provisioner created
	I0626 20:00:18.305274   27145 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0626 20:00:18.305308   27145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.034180933s)
	I0626 20:00:18.305314   27145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.143854812s)
	I0626 20:00:18.305344   27145 main.go:141] libmachine: Making call to close driver server
	I0626 20:00:18.305360   27145 main.go:141] libmachine: (multinode-050558) Calling .Close
	I0626 20:00:18.305347   27145 main.go:141] libmachine: Making call to close driver server
	I0626 20:00:18.305396   27145 main.go:141] libmachine: (multinode-050558) Calling .Close
	I0626 20:00:18.305667   27145 main.go:141] libmachine: (multinode-050558) DBG | Closing plugin on server side
	I0626 20:00:18.305693   27145 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:00:18.305701   27145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:00:18.305710   27145 main.go:141] libmachine: Making call to close driver server
	I0626 20:00:18.305717   27145 main.go:141] libmachine: (multinode-050558) Calling .Close
	I0626 20:00:18.305738   27145 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:00:18.305819   27145 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:00:18.305832   27145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:00:18.305840   27145 main.go:141] libmachine: Making call to close driver server
	I0626 20:00:18.305849   27145 main.go:141] libmachine: (multinode-050558) Calling .Close
	I0626 20:00:18.305909   27145 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:00:18.305917   27145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:00:18.306098   27145 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:00:18.306125   27145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:00:18.306046   27145 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:00:18.306143   27145 main.go:141] libmachine: Making call to close driver server
	I0626 20:00:18.306403   27145 main.go:141] libmachine: (multinode-050558) Calling .Close
	I0626 20:00:18.306433   27145 node_ready.go:35] waiting up to 6m0s for node "multinode-050558" to be "Ready" ...
	I0626 20:00:18.306147   27145 main.go:141] libmachine: (multinode-050558) DBG | Closing plugin on server side
	I0626 20:00:18.306728   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:18.306743   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:18.306754   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:18.306762   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:18.308155   27145 main.go:141] libmachine: (multinode-050558) DBG | Closing plugin on server side
	I0626 20:00:18.308158   27145 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:00:18.308185   27145 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:00:18.310093   27145 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0626 20:00:18.311489   27145 addons.go:499] enable addons completed in 1.390893405s: enabled=[storage-provisioner default-storageclass]
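
Both addons report their created objects above (pod/storage-provisioner, storageclass standard, plus the RBAC wiring). A sketch of verifying them after the fact:

    # confirm the addon objects actually exist
    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass standard
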
	I0626 20:00:18.320582   27145 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0626 20:00:18.320604   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:18.320616   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:18 GMT
	I0626 20:00:18.320626   27145 round_trippers.go:580]     Audit-Id: 7464b8e3-0943-434b-b8bc-ebf907d5739b
	I0626 20:00:18.320638   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:18.320649   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:18.320664   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:18.320675   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:18.320839   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"322","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0626 20:00:18.822204   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:18.822226   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:18.822233   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:18.822240   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:18.825004   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:18.825023   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:18.825030   27145 round_trippers.go:580]     Audit-Id: d67d5df4-19e4-463d-98b8-6a2a5ac55486
	I0626 20:00:18.825036   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:18.825041   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:18.825047   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:18.825053   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:18.825061   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:18 GMT
	I0626 20:00:18.825199   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"322","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0626 20:00:19.321854   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:19.321878   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:19.321886   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:19.321892   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:19.324974   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:19.324996   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:19.325004   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:19.325010   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:19.325015   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:19.325021   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:19 GMT
	I0626 20:00:19.325030   27145 round_trippers.go:580]     Audit-Id: 9efa4541-4835-4b03-915d-9dcf1dc87138
	I0626 20:00:19.325035   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:19.325498   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"322","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0626 20:00:19.822240   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:19.822277   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:19.822288   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:19.822295   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:19.825663   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:19.825684   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:19.825692   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:19.825697   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:19.825703   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:19.825708   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:19 GMT
	I0626 20:00:19.825713   27145 round_trippers.go:580]     Audit-Id: 93b5d077-0704-438c-a2df-a4aab4807ff7
	I0626 20:00:19.825718   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:19.825852   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"322","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0626 20:00:20.322596   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:20.322620   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:20.322628   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:20.322634   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:20.325814   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:20.325842   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:20.325849   27145 round_trippers.go:580]     Audit-Id: cf2f74f0-0086-433b-8a56-97ce7b4ac094
	I0626 20:00:20.325855   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:20.325860   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:20.325865   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:20.325871   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:20.325876   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:20 GMT
	I0626 20:00:20.326578   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"322","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0626 20:00:20.326864   27145 node_ready.go:58] node "multinode-050558" has status "Ready":"False"
	I0626 20:00:20.822364   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:20.822396   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:20.822408   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:20.822418   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:20.825034   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:20.825061   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:20.825071   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:20.825080   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:20 GMT
	I0626 20:00:20.825088   27145 round_trippers.go:580]     Audit-Id: 92dd73d5-a4a5-4535-b507-bddf7c66d5d6
	I0626 20:00:20.825097   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:20.825106   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:20.825115   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:20.825206   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"322","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0626 20:00:21.321774   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:21.321802   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:21.321810   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:21.321816   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:21.324443   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:21.324463   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:21.324473   27145 round_trippers.go:580]     Audit-Id: 06fabdcb-78a6-4c0b-8a6c-bf6a0ca1d571
	I0626 20:00:21.324478   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:21.324484   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:21.324489   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:21.324495   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:21.324503   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:21 GMT
	I0626 20:00:21.324792   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"322","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0626 20:00:21.822513   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:21.822536   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:21.822544   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:21.822550   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:21.825658   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:21.825675   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:21.825681   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:21.825686   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:21.825692   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:21.825698   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:21 GMT
	I0626 20:00:21.825706   27145 round_trippers.go:580]     Audit-Id: 28785cef-d1e9-46cd-b2bc-d7515a049c4f
	I0626 20:00:21.825715   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:21.825896   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"322","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0626 20:00:22.322170   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:22.322189   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:22.322198   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:22.322204   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:22.327912   27145 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0626 20:00:22.327940   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:22.327951   27145 round_trippers.go:580]     Audit-Id: ebe89949-7579-4b1e-b15c-b5b61b24ec30
	I0626 20:00:22.327960   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:22.327978   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:22.327986   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:22.327999   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:22.328010   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:22 GMT
	I0626 20:00:22.328159   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:22.328585   27145 node_ready.go:49] node "multinode-050558" has status "Ready":"True"
	I0626 20:00:22.328609   27145 node_ready.go:38] duration metric: took 4.022151937s waiting for node "multinode-050558" to be "Ready" ...
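
The half-second-spaced GETs of /api/v1/nodes/multinode-050558 above are the node_ready.go poll loop. A minimal client-go sketch of the same pattern, assuming only a kubeconfig at the default path and the node name from the log; the 2-minute timeout is illustrative, and this approximates rather than quotes minikube's implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig at the default location points at the cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll roughly every 500ms (matching the spacing of the GETs above)
        // until the node's Ready condition is True or the timeout expires.
        err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-050558", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println(`node "multinode-050558" has status "Ready":"True"`)
    }

In the trace, the Ready flip is visible as the node's resourceVersion moving from 322 to 387 at 20:00:22, after which node_ready.go reports the 4.02s duration above.
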
	I0626 20:00:22.328621   27145 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:00:22.328700   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:00:22.328710   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:22.328721   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:22.328735   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:22.338795   27145 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0626 20:00:22.338819   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:22.338830   27145 round_trippers.go:580]     Audit-Id: 099c530e-d74e-4bdf-929c-cc074debf60b
	I0626 20:00:22.338838   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:22.338846   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:22.338854   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:22.338861   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:22.338871   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:22 GMT
	I0626 20:00:22.342134   27145 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"393"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"391","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54819 chars]
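
The single PodList GET above backs the "extra waiting" phase announced at 20:00:22.328621: minikube lists kube-system once, then waits in turn on every pod that carries one of the system-critical labels. A rough sketch of that selection step, where the helper name filterCritical is mine and not minikube's:

    package systempods

    import corev1 "k8s.io/api/core/v1"

    // criticalSelectors mirrors the label list in the 20:00:22.328621 log line;
    // a pod counts as system-critical if any one key=value pair matches.
    var criticalSelectors = []struct{ key, value string }{
        {"k8s-app", "kube-dns"},
        {"component", "etcd"},
        {"component", "kube-apiserver"},
        {"component", "kube-controller-manager"},
        {"k8s-app", "kube-proxy"},
        {"component", "kube-scheduler"},
    }

    // filterCritical (hypothetical helper) keeps the pods from one kube-system
    // List call that carry a system-critical label, preserving API order,
    // which is why coredns-5d78c9869d-5wffn is waited on first below.
    func filterCritical(list *corev1.PodList) []corev1.Pod {
        var out []corev1.Pod
        for _, pod := range list.Items {
            for _, sel := range criticalSelectors {
                if pod.Labels[sel.key] == sel.value {
                    out = append(out, pod)
                    break
                }
            }
        }
        return out
    }
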
	I0626 20:00:22.345009   27145 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:22.345097   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:00:22.345108   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:22.345119   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:22.345129   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:22.348565   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:22.348586   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:22.348595   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:22.348603   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:22.348615   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:22 GMT
	I0626 20:00:22.348623   27145 round_trippers.go:580]     Audit-Id: d6ca9d9a-be7c-4f1e-a9a7-0611d561e513
	I0626 20:00:22.348632   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:22.348641   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:22.348746   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"391","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0626 20:00:22.349277   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:22.349298   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:22.349308   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:22.349317   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:22.352891   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:22.352916   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:22.352928   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:22.352935   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:22.352943   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:22.352949   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:22 GMT
	I0626 20:00:22.352955   27145 round_trippers.go:580]     Audit-Id: 4b0981d4-bce9-4eb1-8ea6-1ed18d8fa4b8
	I0626 20:00:22.352960   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:22.353112   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:22.853985   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:00:22.854014   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:22.854027   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:22.854036   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:22.857291   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:22.857312   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:22.857319   27145 round_trippers.go:580]     Audit-Id: f44e4517-5c53-4546-a5df-db8bf24b493e
	I0626 20:00:22.857325   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:22.857330   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:22.857335   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:22.857341   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:22.857346   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:22 GMT
	I0626 20:00:22.858184   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"391","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0626 20:00:22.858768   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:22.858787   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:22.858798   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:22.858808   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:22.863136   27145 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:00:22.863157   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:22.863166   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:22.863176   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:22.863183   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:22 GMT
	I0626 20:00:22.863192   27145 round_trippers.go:580]     Audit-Id: c49b14ef-aafa-4e7b-b5ff-66558e41d6a5
	I0626 20:00:22.863201   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:22.863214   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:22.863358   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:23.353923   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:00:23.353969   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:23.354002   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:23.354011   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:23.356710   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:23.356737   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:23.356747   27145 round_trippers.go:580]     Audit-Id: 6058818b-f654-4800-9c01-da0f27bd603c
	I0626 20:00:23.356756   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:23.356765   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:23.356773   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:23.356781   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:23.356790   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:23 GMT
	I0626 20:00:23.356911   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"391","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0626 20:00:23.357462   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:23.357476   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:23.357483   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:23.357489   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:23.359687   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:23.359710   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:23.359721   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:23.359729   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:23.359736   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:23.359744   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:23 GMT
	I0626 20:00:23.359752   27145 round_trippers.go:580]     Audit-Id: 0012f383-ff47-4424-bed4-1c022bd21ee7
	I0626 20:00:23.359764   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:23.359892   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:23.854601   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:00:23.854623   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:23.854631   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:23.854638   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:23.857799   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:23.857821   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:23.857832   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:23.857845   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:23.857854   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:23.857861   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:23 GMT
	I0626 20:00:23.857874   27145 round_trippers.go:580]     Audit-Id: 3234d724-6262-42b4-8584-ba0d8d6dfdd3
	I0626 20:00:23.857887   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:23.858037   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"391","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0626 20:00:23.858608   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:23.858627   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:23.858638   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:23.858648   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:23.860884   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:23.860904   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:23.860911   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:23.860917   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:23.860923   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:23.860929   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:23 GMT
	I0626 20:00:23.860936   27145 round_trippers.go:580]     Audit-Id: 4b3caca6-f78c-4d51-a00d-7521742b4066
	I0626 20:00:23.860944   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:23.861477   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:24.353720   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:00:24.353746   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:24.353760   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:24.353767   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:24.357520   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:24.357546   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:24.357553   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:24 GMT
	I0626 20:00:24.357559   27145 round_trippers.go:580]     Audit-Id: d6298c45-9190-40cb-ad34-ba61986df27b
	I0626 20:00:24.357564   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:24.357572   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:24.357580   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:24.357587   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:24.357759   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"404","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0626 20:00:24.358211   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:24.358225   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:24.358235   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:24.358246   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:24.360753   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:24.360768   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:24.360775   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:24.360782   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:24.360788   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:24 GMT
	I0626 20:00:24.360796   27145 round_trippers.go:580]     Audit-Id: 83fed22e-7b97-4a87-90ed-3fb92c648125
	I0626 20:00:24.360804   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:24.360815   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:24.360956   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:24.361240   27145 pod_ready.go:92] pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace has status "Ready":"True"
	I0626 20:00:24.361254   27145 pod_ready.go:81] duration metric: took 2.016222382s waiting for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
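
Each per-pod wait above is the same two-request cycle about twice a second: GET the pod, then GET the node, since a pod is only counted Ready while its host node still is. The `has status "Ready":"True"` verdict at 20:00:24.361240 comes from the pod's Ready condition; a minimal sketch of that predicate, approximating rather than quoting pod_ready.go:

    package systempods

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the Pod's Ready condition is True, the
    // check behind the `has status "Ready":"True"` lines in this log.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

Note that coredns only passes once its resourceVersion moves from 391 to 404 at 20:00:24; the earlier polls all saw the not-yet-ready object, which accounts for the 2.016s duration metric above.
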
	I0626 20:00:24.361262   27145 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:24.361319   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-050558
	I0626 20:00:24.361327   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:24.361333   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:24.361339   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:24.363686   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:24.363700   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:24.363707   27145 round_trippers.go:580]     Audit-Id: 51295919-3218-43a4-9b56-cd3e46d54cfe
	I0626 20:00:24.363712   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:24.363720   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:24.363728   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:24.363736   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:24.363750   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:24 GMT
	I0626 20:00:24.363873   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-050558","namespace":"kube-system","uid":"457d2420-8ece-4b92-8281-7866fa6a884a","resourceVersion":"298","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.mirror":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.seen":"2023-06-26T19:59:55.756268397Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I0626 20:00:24.364211   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:24.364222   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:24.364229   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:24.364235   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:24.366521   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:24.366536   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:24.366542   27145 round_trippers.go:580]     Audit-Id: 2e83284e-19ed-46f0-9825-82b2855b0f3e
	I0626 20:00:24.366547   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:24.366552   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:24.366557   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:24.366563   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:24.366574   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:24 GMT
	I0626 20:00:24.366750   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:24.867648   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-050558
	I0626 20:00:24.867672   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:24.867680   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:24.867686   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:24.870554   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:24.870579   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:24.870591   27145 round_trippers.go:580]     Audit-Id: 12e93789-d920-47e7-812d-0ebc2a9b064f
	I0626 20:00:24.870601   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:24.870610   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:24.870620   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:24.870629   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:24.870643   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:24 GMT
	I0626 20:00:24.870776   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-050558","namespace":"kube-system","uid":"457d2420-8ece-4b92-8281-7866fa6a884a","resourceVersion":"298","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.mirror":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.seen":"2023-06-26T19:59:55.756268397Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I0626 20:00:24.871227   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:24.871245   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:24.871253   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:24.871259   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:24.873681   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:24.873698   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:24.873705   27145 round_trippers.go:580]     Audit-Id: 5ae30688-5d32-4a5d-8414-d174f1555216
	I0626 20:00:24.873710   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:24.873716   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:24.873721   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:24.873726   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:24.873731   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:24 GMT
	I0626 20:00:24.874355   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:25.368111   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-050558
	I0626 20:00:25.368134   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.368145   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.368152   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.370862   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:25.370885   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.370894   27145 round_trippers.go:580]     Audit-Id: 46b175a1-8a1e-4100-99dc-1f514c2c9d89
	I0626 20:00:25.370902   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.370926   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.370941   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.370949   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.370957   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.371091   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-050558","namespace":"kube-system","uid":"457d2420-8ece-4b92-8281-7866fa6a884a","resourceVersion":"411","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.mirror":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.seen":"2023-06-26T19:59:55.756268397Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0626 20:00:25.371571   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:25.371587   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.371600   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.371610   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.374058   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:25.374077   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.374087   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.374096   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.374103   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.374115   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.374123   27145 round_trippers.go:580]     Audit-Id: 62f6b7c2-748a-4193-b8de-e26425b51efc
	I0626 20:00:25.374135   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.374414   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:25.374738   27145 pod_ready.go:92] pod "etcd-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:00:25.374754   27145 pod_ready.go:81] duration metric: took 1.013487042s waiting for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:25.374765   27145 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:25.374818   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:00:25.374827   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.374833   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.374839   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.376954   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:25.376978   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.376988   27145 round_trippers.go:580]     Audit-Id: ec79a226-4b63-4d84-b997-0bf37872e874
	I0626 20:00:25.376996   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.377007   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.377016   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.377027   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.377040   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.377155   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"412","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0626 20:00:25.377646   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:25.377665   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.377675   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.377684   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.379549   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:00:25.379568   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.379577   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.379585   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.379594   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.379602   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.379612   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.379623   27145 round_trippers.go:580]     Audit-Id: 88ffbf4b-0241-4d30-ad1b-775bace5b409
	I0626 20:00:25.379789   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:25.380143   27145 pod_ready.go:92] pod "kube-apiserver-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:00:25.380161   27145 pod_ready.go:81] duration metric: took 5.387921ms waiting for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:25.380172   27145 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:25.380224   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-050558
	I0626 20:00:25.380235   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.380245   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.380255   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.382178   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:00:25.382198   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.382207   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.382216   27145 round_trippers.go:580]     Audit-Id: ff599096-c4c4-4ed6-bfd9-105baa96f27d
	I0626 20:00:25.382229   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.382238   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.382252   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.382260   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.382563   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-050558","namespace":"kube-system","uid":"d90eb1a6-03bd-4bdf-b50d-9448cef0b578","resourceVersion":"409","creationTimestamp":"2023-06-26T20:00:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.mirror":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.seen":"2023-06-26T20:00:04.802665770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0626 20:00:25.383003   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:25.383018   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.383029   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.383039   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.384609   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:00:25.384627   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.384636   27145 round_trippers.go:580]     Audit-Id: d59f84b3-9f2e-4128-b24d-123125aa39a8
	I0626 20:00:25.384644   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.384655   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.384663   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.384674   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.384685   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.384807   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:25.385152   27145 pod_ready.go:92] pod "kube-controller-manager-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:00:25.385167   27145 pod_ready.go:81] duration metric: took 4.988319ms waiting for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:25.385181   27145 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:25.385232   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:00:25.385239   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.385248   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.385263   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.387561   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:25.387580   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.387589   27145 round_trippers.go:580]     Audit-Id: 27c4bda1-2bd9-4648-bb90-4f0beaed2d5b
	I0626 20:00:25.387598   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.387609   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.387621   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.387632   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.387643   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.387935   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-67x99","generateName":"kube-proxy-","namespace":"kube-system","uid":"7ffa817a-1b4a-41a1-9a56-5c65849dc57e","resourceVersion":"377","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0626 20:00:25.388342   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:25.388356   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.388366   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.388372   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.390369   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:00:25.390388   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.390398   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.390406   27145 round_trippers.go:580]     Audit-Id: 4c394f44-3a65-47a7-9151-0c4c03afccd6
	I0626 20:00:25.390416   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.390427   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.390439   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.390450   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.391059   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:25.391298   27145 pod_ready.go:92] pod "kube-proxy-67x99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:00:25.391311   27145 pod_ready.go:81] duration metric: took 6.119512ms waiting for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:25.391318   27145 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:25.391351   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:00:25.391355   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.391361   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.391367   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.393092   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:00:25.393111   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.393121   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.393130   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.393135   27145 round_trippers.go:580]     Audit-Id: cad9a4e0-7f02-4afe-b6c1-412583e97337
	I0626 20:00:25.393141   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.393146   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.393152   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.393356   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-050558","namespace":"kube-system","uid":"1645e687-25f4-49b9-9d11-5f3db01fe7d2","resourceVersion":"410","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.mirror":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.seen":"2023-06-26T19:59:55.756274617Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0626 20:00:25.554158   27145 request.go:628] Waited for 160.355871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:25.554219   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:00:25.554226   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.554237   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.554247   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.556625   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:25.556650   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.556660   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.556669   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.556677   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.556686   27145 round_trippers.go:580]     Audit-Id: 2f2ed78d-c054-44b3-8ecf-63949b5526f5
	I0626 20:00:25.556694   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.556702   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.557595   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:00:25.558031   27145 pod_ready.go:92] pod "kube-scheduler-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:00:25.558048   27145 pod_ready.go:81] duration metric: took 166.72461ms waiting for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:00:25.558057   27145 pod_ready.go:38] duration metric: took 3.22941155s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
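
The pod_ready phase above is a poll loop: minikube GETs each system-critical pod (and its node) and inspects the pod's Ready condition until it is True or the 6m0s budget runs out. A minimal sketch of the same check with client-go; the poll interval and the hard-coded pod name are illustrative, not minikube's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same budget as the log's "up to 6m0s"
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-67x99", metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // poll interval; minikube's differs
        }
        fmt.Println("timed out waiting for pod")
    }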
	I0626 20:00:25.558070   27145 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:00:25.558115   27145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:00:25.570675   27145 command_runner.go:130] > 1112
	I0626 20:00:25.570714   27145 api_server.go:72] duration metric: took 8.046881496s to wait for apiserver process to appear ...
	I0626 20:00:25.570724   27145 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:00:25.570747   27145 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:00:25.575765   27145 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
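
The healthz wait is a plain HTTPS GET against the apiserver's /healthz endpoint, repeated until it returns 200 with body "ok". A trimmed sketch (skipping TLS verification only to keep it short; minikube trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping cert verification keeps the sketch short; real code
            // should trust the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.229:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }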
	I0626 20:00:25.575843   27145 round_trippers.go:463] GET https://192.168.39.229:8443/version
	I0626 20:00:25.575868   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.575877   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.575883   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.577607   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:00:25.577623   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.577632   27145 round_trippers.go:580]     Audit-Id: b7e4f048-658b-483f-9a68-acfd6e34eb8f
	I0626 20:00:25.577641   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.577650   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.577660   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.577669   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.577682   27145 round_trippers.go:580]     Content-Length: 263
	I0626 20:00:25.577694   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.577751   27145 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0626 20:00:25.577866   27145 api_server.go:141] control plane version: v1.27.3
	I0626 20:00:25.577887   27145 api_server.go:131] duration metric: took 7.151352ms to wait for apiserver health ...
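
The version probe just decodes the /version body shown above into a small struct; the JSON field names mirror the payload:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo matches the fields of the /version response body above.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        body := []byte(`{"major":"1","minor":"27","gitVersion":"v1.27.3","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.27.3
    }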
	I0626 20:00:25.577897   27145 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:00:25.754361   27145 request.go:628] Waited for 176.371854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:00:25.754428   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:00:25.754436   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.754450   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.754458   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.758565   27145 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:00:25.758593   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.758603   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.758612   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.758618   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.758625   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.758632   27145 round_trippers.go:580]     Audit-Id: c64eb161-193f-4a07-a70d-d64c40824667
	I0626 20:00:25.758639   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.759424   27145 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"404","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0626 20:00:25.761194   27145 system_pods.go:59] 8 kube-system pods found
	I0626 20:00:25.761217   27145 system_pods.go:61] "coredns-5d78c9869d-5wffn" [c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5] Running
	I0626 20:00:25.761222   27145 system_pods.go:61] "etcd-multinode-050558" [457d2420-8ece-4b92-8281-7866fa6a884a] Running
	I0626 20:00:25.761226   27145 system_pods.go:61] "kindnet-vjpzs" [695a59a7-ddfd-4f5f-8084-86279daa17b6] Running
	I0626 20:00:25.761230   27145 system_pods.go:61] "kube-apiserver-multinode-050558" [00573436-b505-4be6-a86a-3ba9b74e1ad5] Running
	I0626 20:00:25.761235   27145 system_pods.go:61] "kube-controller-manager-multinode-050558" [d90eb1a6-03bd-4bdf-b50d-9448cef0b578] Running
	I0626 20:00:25.761238   27145 system_pods.go:61] "kube-proxy-67x99" [7ffa817a-1b4a-41a1-9a56-5c65849dc57e] Running
	I0626 20:00:25.761242   27145 system_pods.go:61] "kube-scheduler-multinode-050558" [1645e687-25f4-49b9-9d11-5f3db01fe7d2] Running
	I0626 20:00:25.761247   27145 system_pods.go:61] "storage-provisioner" [fd433ce1-f37e-4168-930f-a93cd00821cb] Running
	I0626 20:00:25.761252   27145 system_pods.go:74] duration metric: took 183.349829ms to wait for pod list to return data ...
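
The recurring "Waited for ...ms due to client-side throttling, not priority and fairness" lines are client-go's default client-side rate limiter (QPS 5, burst 10) delaying back-to-back requests; the server's priority-and-fairness machinery is not involved. The limits live on rest.Config; a sketch of raising them (the values here are illustrative):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS=5, Burst=10; bursts of GETs beyond that
        // are delayed client-side, which is what the log lines report.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }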
	I0626 20:00:25.761260   27145 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:00:25.954682   27145 request.go:628] Waited for 193.363535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/default/serviceaccounts
	I0626 20:00:25.954730   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/default/serviceaccounts
	I0626 20:00:25.954735   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:25.954742   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:25.954759   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:25.957811   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:25.957831   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:25.957841   27145 round_trippers.go:580]     Audit-Id: 56526307-9693-4757-849d-aa0bb4e49fa7
	I0626 20:00:25.957847   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:25.957853   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:25.957858   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:25.957864   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:25.957870   27145 round_trippers.go:580]     Content-Length: 261
	I0626 20:00:25.957875   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:25 GMT
	I0626 20:00:25.957896   27145 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"74e5f487-10bd-4618-86f4-85ee2fa9143f","resourceVersion":"318","creationTimestamp":"2023-06-26T20:00:16Z"}}]}
	I0626 20:00:25.958083   27145 default_sa.go:45] found service account: "default"
	I0626 20:00:25.958097   27145 default_sa.go:55] duration metric: took 196.831893ms for default service account to be created ...
	I0626 20:00:25.958104   27145 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:00:26.154584   27145 request.go:628] Waited for 196.418515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:00:26.154635   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:00:26.154641   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:26.154649   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:26.154655   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:26.158480   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:00:26.158497   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:26.158509   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:26.158518   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:26.158526   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:26.158536   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:26 GMT
	I0626 20:00:26.158544   27145 round_trippers.go:580]     Audit-Id: 17649db8-60b4-4080-831a-5cac1748ca04
	I0626 20:00:26.158552   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:26.159132   27145 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"404","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0626 20:00:26.160720   27145 system_pods.go:86] 8 kube-system pods found
	I0626 20:00:26.160737   27145 system_pods.go:89] "coredns-5d78c9869d-5wffn" [c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5] Running
	I0626 20:00:26.160742   27145 system_pods.go:89] "etcd-multinode-050558" [457d2420-8ece-4b92-8281-7866fa6a884a] Running
	I0626 20:00:26.160746   27145 system_pods.go:89] "kindnet-vjpzs" [695a59a7-ddfd-4f5f-8084-86279daa17b6] Running
	I0626 20:00:26.160750   27145 system_pods.go:89] "kube-apiserver-multinode-050558" [00573436-b505-4be6-a86a-3ba9b74e1ad5] Running
	I0626 20:00:26.160755   27145 system_pods.go:89] "kube-controller-manager-multinode-050558" [d90eb1a6-03bd-4bdf-b50d-9448cef0b578] Running
	I0626 20:00:26.160760   27145 system_pods.go:89] "kube-proxy-67x99" [7ffa817a-1b4a-41a1-9a56-5c65849dc57e] Running
	I0626 20:00:26.160764   27145 system_pods.go:89] "kube-scheduler-multinode-050558" [1645e687-25f4-49b9-9d11-5f3db01fe7d2] Running
	I0626 20:00:26.160772   27145 system_pods.go:89] "storage-provisioner" [fd433ce1-f37e-4168-930f-a93cd00821cb] Running
	I0626 20:00:26.160777   27145 system_pods.go:126] duration metric: took 202.669725ms to wait for k8s-apps to be running ...
	I0626 20:00:26.160784   27145 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:00:26.160822   27145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:00:26.175122   27145 system_svc.go:56] duration metric: took 14.331741ms WaitForService to wait for kubelet.
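
The kubelet check runs `sudo systemctl is-active --quiet service kubelet` over SSH and looks only at the exit code; with --quiet the command prints nothing. The local equivalent:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active;
        // only the exit code matters.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }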
	I0626 20:00:26.175141   27145 kubeadm.go:581] duration metric: took 8.651308459s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:00:26.175159   27145 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:00:26.354632   27145 request.go:628] Waited for 179.405253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes
	I0626 20:00:26.354701   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes
	I0626 20:00:26.354707   27145 round_trippers.go:469] Request Headers:
	I0626 20:00:26.354714   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:00:26.354722   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:00:26.357427   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:00:26.357452   27145 round_trippers.go:577] Response Headers:
	I0626 20:00:26.357460   27145 round_trippers.go:580]     Audit-Id: 17fba021-fee7-47b4-8437-38293d48e081
	I0626 20:00:26.357466   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:00:26.357472   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:00:26.357477   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:00:26.357482   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:00:26.357487   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:00:26 GMT
	I0626 20:00:26.357812   27145 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0626 20:00:26.358151   27145 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:00:26.358169   27145 node_conditions.go:123] node cpu capacity is 2
	I0626 20:00:26.358181   27145 node_conditions.go:105] duration metric: took 183.017335ms to run NodePressure ...
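
The NodePressure step reads the ephemeral-storage and CPU capacity straight out of the NodeList above. With client-go the same figures come from node.Status.Capacity; a short sketch:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            // For the node above this prints "ephemeral storage 17784752Ki, cpu 2".
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, eph.String(), cpu.String())
        }
    }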
	I0626 20:00:26.358191   27145 start.go:228] waiting for startup goroutines ...
	I0626 20:00:26.358205   27145 start.go:233] waiting for cluster config update ...
	I0626 20:00:26.358213   27145 start.go:242] writing updated cluster config ...
	I0626 20:00:26.360900   27145 out.go:177] 
	I0626 20:00:26.362746   27145 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:00:26.362819   27145 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 20:00:26.364849   27145 out.go:177] * Starting worker node multinode-050558-m02 in cluster multinode-050558
	I0626 20:00:26.366402   27145 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:00:26.366429   27145 cache.go:57] Caching tarball of preloaded images
	I0626 20:00:26.366525   27145 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 20:00:26.366537   27145 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 20:00:26.366628   27145 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
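
The preload step only checks that the cached tarball already exists on disk before deciding to skip the download. Roughly:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        home, _ := os.UserHomeDir()
        tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
            "preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4")
        if _, err := os.Stat(tarball); err == nil {
            fmt.Println("found in cache, skipping download")
        } else {
            fmt.Println("not cached, would download:", err)
        }
    }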
	I0626 20:00:26.366789   27145 start.go:365] acquiring machines lock for multinode-050558-m02: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:00:26.366843   27145 start.go:369] acquired machines lock for "multinode-050558-m02" in 25.296µs
	I0626 20:00:26.366864   27145 start.go:93] Provisioning new machine with config: &{Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0626 20:00:26.366937   27145 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0626 20:00:26.368885   27145 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0626 20:00:26.368960   27145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:00:26.368992   27145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:00:26.383622   27145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0626 20:00:26.384020   27145 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:00:26.384485   27145 main.go:141] libmachine: Using API Version  1
	I0626 20:00:26.384516   27145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:00:26.384869   27145 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:00:26.385016   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetMachineName
	I0626 20:00:26.385276   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
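
"Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:46435" is libmachine's out-of-process driver model: the kvm2 driver runs as a separate binary serving Go net/rpc, and every subsequent .GetVersion/.GetMachineName/.Create line is an RPC round trip into it. A toy sketch of that shape only; the Driver service and its method here are invented for illustration, and the real wire protocol is libmachine's, not this:

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    // Driver is a stand-in for a machine driver exposed over RPC.
    type Driver struct{}

    // GetVersion is a hypothetical RPC method, loosely analogous to the
    // ".GetVersion" calls in the log.
    func (d *Driver) GetVersion(_ int, reply *int) error {
        *reply = 1
        return nil
    }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            panic(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0") // like "Plugin server listening at ..."
        if err != nil {
            panic(err)
        }
        go srv.Accept(ln)

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            panic(err)
        }
        var v int
        if err := client.Call("Driver.GetVersion", 0, &v); err != nil {
            panic(err)
        }
        fmt.Println("driver API version:", v)
    }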
	I0626 20:00:26.385437   27145 start.go:159] libmachine.API.Create for "multinode-050558" (driver="kvm2")
	I0626 20:00:26.385469   27145 client.go:168] LocalClient.Create starting
	I0626 20:00:26.385502   27145 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem
	I0626 20:00:26.385541   27145 main.go:141] libmachine: Decoding PEM data...
	I0626 20:00:26.385558   27145 main.go:141] libmachine: Parsing certificate...
	I0626 20:00:26.385606   27145 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem
	I0626 20:00:26.385627   27145 main.go:141] libmachine: Decoding PEM data...
	I0626 20:00:26.385636   27145 main.go:141] libmachine: Parsing certificate...
	I0626 20:00:26.385659   27145 main.go:141] libmachine: Running pre-create checks...
	I0626 20:00:26.385666   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .PreCreateCheck
	I0626 20:00:26.385856   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetConfigRaw
	I0626 20:00:26.386236   27145 main.go:141] libmachine: Creating machine...
	I0626 20:00:26.386250   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .Create
	I0626 20:00:26.386390   27145 main.go:141] libmachine: (multinode-050558-m02) Creating KVM machine...
	I0626 20:00:26.387799   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found existing default KVM network
	I0626 20:00:26.387938   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found existing private KVM network mk-multinode-050558
	I0626 20:00:26.388073   27145 main.go:141] libmachine: (multinode-050558-m02) Setting up store path in /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02 ...
	I0626 20:00:26.388105   27145 main.go:141] libmachine: (multinode-050558-m02) Building disk image from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso
	I0626 20:00:26.388132   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:26.388031   27507 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:00:26.388243   27145 main.go:141] libmachine: (multinode-050558-m02) Downloading /home/jenkins/minikube-integration/16761-7242/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso...
	I0626 20:00:26.576058   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:26.575944   27507 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa...
	I0626 20:00:26.675412   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:26.675273   27507 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/multinode-050558-m02.rawdisk...
	I0626 20:00:26.675441   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Writing magic tar header
	I0626 20:00:26.675457   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Writing SSH key tar header
	I0626 20:00:26.675465   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:26.675383   27507 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02 ...
	I0626 20:00:26.675476   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02
	I0626 20:00:26.675570   27145 main.go:141] libmachine: (multinode-050558-m02) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02 (perms=drwx------)
	I0626 20:00:26.675611   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines
	I0626 20:00:26.675622   27145 main.go:141] libmachine: (multinode-050558-m02) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines (perms=drwxr-xr-x)
	I0626 20:00:26.675658   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:00:26.675695   27145 main.go:141] libmachine: (multinode-050558-m02) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube (perms=drwxr-xr-x)
	I0626 20:00:26.675710   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242
	I0626 20:00:26.675729   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0626 20:00:26.675742   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Checking permissions on dir: /home/jenkins
	I0626 20:00:26.675755   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Checking permissions on dir: /home
	I0626 20:00:26.675766   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Skipping /home - not owner
	I0626 20:00:26.675786   27145 main.go:141] libmachine: (multinode-050558-m02) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242 (perms=drwxrwxr-x)
	I0626 20:00:26.675803   27145 main.go:141] libmachine: (multinode-050558-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0626 20:00:26.675815   27145 main.go:141] libmachine: (multinode-050558-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0626 20:00:26.675833   27145 main.go:141] libmachine: (multinode-050558-m02) Creating domain...
	I0626 20:00:26.676557   27145 main.go:141] libmachine: (multinode-050558-m02) define libvirt domain using xml: 
	I0626 20:00:26.676582   27145 main.go:141] libmachine: (multinode-050558-m02) <domain type='kvm'>
	I0626 20:00:26.676594   27145 main.go:141] libmachine: (multinode-050558-m02)   <name>multinode-050558-m02</name>
	I0626 20:00:26.676602   27145 main.go:141] libmachine: (multinode-050558-m02)   <memory unit='MiB'>2200</memory>
	I0626 20:00:26.676609   27145 main.go:141] libmachine: (multinode-050558-m02)   <vcpu>2</vcpu>
	I0626 20:00:26.676618   27145 main.go:141] libmachine: (multinode-050558-m02)   <features>
	I0626 20:00:26.676631   27145 main.go:141] libmachine: (multinode-050558-m02)     <acpi/>
	I0626 20:00:26.676644   27145 main.go:141] libmachine: (multinode-050558-m02)     <apic/>
	I0626 20:00:26.676655   27145 main.go:141] libmachine: (multinode-050558-m02)     <pae/>
	I0626 20:00:26.676670   27145 main.go:141] libmachine: (multinode-050558-m02)     
	I0626 20:00:26.676698   27145 main.go:141] libmachine: (multinode-050558-m02)   </features>
	I0626 20:00:26.676727   27145 main.go:141] libmachine: (multinode-050558-m02)   <cpu mode='host-passthrough'>
	I0626 20:00:26.676740   27145 main.go:141] libmachine: (multinode-050558-m02)   
	I0626 20:00:26.676754   27145 main.go:141] libmachine: (multinode-050558-m02)   </cpu>
	I0626 20:00:26.676770   27145 main.go:141] libmachine: (multinode-050558-m02)   <os>
	I0626 20:00:26.676783   27145 main.go:141] libmachine: (multinode-050558-m02)     <type>hvm</type>
	I0626 20:00:26.676802   27145 main.go:141] libmachine: (multinode-050558-m02)     <boot dev='cdrom'/>
	I0626 20:00:26.676819   27145 main.go:141] libmachine: (multinode-050558-m02)     <boot dev='hd'/>
	I0626 20:00:26.676833   27145 main.go:141] libmachine: (multinode-050558-m02)     <bootmenu enable='no'/>
	I0626 20:00:26.676842   27145 main.go:141] libmachine: (multinode-050558-m02)   </os>
	I0626 20:00:26.676850   27145 main.go:141] libmachine: (multinode-050558-m02)   <devices>
	I0626 20:00:26.676858   27145 main.go:141] libmachine: (multinode-050558-m02)     <disk type='file' device='cdrom'>
	I0626 20:00:26.676869   27145 main.go:141] libmachine: (multinode-050558-m02)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/boot2docker.iso'/>
	I0626 20:00:26.676881   27145 main.go:141] libmachine: (multinode-050558-m02)       <target dev='hdc' bus='scsi'/>
	I0626 20:00:26.676890   27145 main.go:141] libmachine: (multinode-050558-m02)       <readonly/>
	I0626 20:00:26.676898   27145 main.go:141] libmachine: (multinode-050558-m02)     </disk>
	I0626 20:00:26.676913   27145 main.go:141] libmachine: (multinode-050558-m02)     <disk type='file' device='disk'>
	I0626 20:00:26.676926   27145 main.go:141] libmachine: (multinode-050558-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0626 20:00:26.676937   27145 main.go:141] libmachine: (multinode-050558-m02)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/multinode-050558-m02.rawdisk'/>
	I0626 20:00:26.676946   27145 main.go:141] libmachine: (multinode-050558-m02)       <target dev='hda' bus='virtio'/>
	I0626 20:00:26.676952   27145 main.go:141] libmachine: (multinode-050558-m02)     </disk>
	I0626 20:00:26.676959   27145 main.go:141] libmachine: (multinode-050558-m02)     <interface type='network'>
	I0626 20:00:26.676969   27145 main.go:141] libmachine: (multinode-050558-m02)       <source network='mk-multinode-050558'/>
	I0626 20:00:26.676977   27145 main.go:141] libmachine: (multinode-050558-m02)       <model type='virtio'/>
	I0626 20:00:26.676983   27145 main.go:141] libmachine: (multinode-050558-m02)     </interface>
	I0626 20:00:26.676991   27145 main.go:141] libmachine: (multinode-050558-m02)     <interface type='network'>
	I0626 20:00:26.676997   27145 main.go:141] libmachine: (multinode-050558-m02)       <source network='default'/>
	I0626 20:00:26.677008   27145 main.go:141] libmachine: (multinode-050558-m02)       <model type='virtio'/>
	I0626 20:00:26.677016   27145 main.go:141] libmachine: (multinode-050558-m02)     </interface>
	I0626 20:00:26.677024   27145 main.go:141] libmachine: (multinode-050558-m02)     <serial type='pty'>
	I0626 20:00:26.677037   27145 main.go:141] libmachine: (multinode-050558-m02)       <target port='0'/>
	I0626 20:00:26.677048   27145 main.go:141] libmachine: (multinode-050558-m02)     </serial>
	I0626 20:00:26.677062   27145 main.go:141] libmachine: (multinode-050558-m02)     <console type='pty'>
	I0626 20:00:26.677074   27145 main.go:141] libmachine: (multinode-050558-m02)       <target type='serial' port='0'/>
	I0626 20:00:26.677086   27145 main.go:141] libmachine: (multinode-050558-m02)     </console>
	I0626 20:00:26.677105   27145 main.go:141] libmachine: (multinode-050558-m02)     <rng model='virtio'>
	I0626 20:00:26.677119   27145 main.go:141] libmachine: (multinode-050558-m02)       <backend model='random'>/dev/random</backend>
	I0626 20:00:26.677131   27145 main.go:141] libmachine: (multinode-050558-m02)     </rng>
	I0626 20:00:26.677159   27145 main.go:141] libmachine: (multinode-050558-m02)     
	I0626 20:00:26.677190   27145 main.go:141] libmachine: (multinode-050558-m02)     
	I0626 20:00:26.677203   27145 main.go:141] libmachine: (multinode-050558-m02)   </devices>
	I0626 20:00:26.677221   27145 main.go:141] libmachine: (multinode-050558-m02) </domain>
	I0626 20:00:26.677237   27145 main.go:141] libmachine: (multinode-050558-m02) 
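
The block above is the complete libvirt domain XML for the new VM: boot from the boot2docker ISO (cdrom) then the raw disk, two virtio NICs (the private mk-multinode-050558 network plus default), a serial console, and a virtio RNG. "Creating domain..." then boils down to defining and starting that XML; with the libvirt-go bindings (assumed here; error handling kept minimal) it looks roughly like:

    package main

    import (
        "fmt"
        "os"

        libvirt "github.com/libvirt/libvirt-go"
    )

    func main() {
        xml, err := os.ReadFile("domain.xml") // the <domain type='kvm'> document above
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // the KVMQemuURI from the config
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Define makes the domain known to libvirt; Create actually boots it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            panic(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain started")
    }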
	I0626 20:00:26.684118   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:1e:b8:66 in network default
	I0626 20:00:26.684718   27145 main.go:141] libmachine: (multinode-050558-m02) Ensuring networks are active...
	I0626 20:00:26.684742   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:26.685612   27145 main.go:141] libmachine: (multinode-050558-m02) Ensuring network default is active
	I0626 20:00:26.685988   27145 main.go:141] libmachine: (multinode-050558-m02) Ensuring network mk-multinode-050558 is active
	I0626 20:00:26.686457   27145 main.go:141] libmachine: (multinode-050558-m02) Getting domain xml...
	I0626 20:00:26.687101   27145 main.go:141] libmachine: (multinode-050558-m02) Creating domain...
	I0626 20:00:27.942649   27145 main.go:141] libmachine: (multinode-050558-m02) Waiting to get IP...
	I0626 20:00:27.943516   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:27.943956   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:27.943986   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:27.943909   27507 retry.go:31] will retry after 273.562665ms: waiting for machine to come up
	I0626 20:00:28.219460   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:28.219963   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:28.219999   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:28.219940   27507 retry.go:31] will retry after 337.473258ms: waiting for machine to come up
	I0626 20:00:28.559532   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:28.559961   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:28.559990   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:28.559912   27507 retry.go:31] will retry after 319.739704ms: waiting for machine to come up
	I0626 20:00:28.881510   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:28.881927   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:28.881954   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:28.881888   27507 retry.go:31] will retry after 488.727066ms: waiting for machine to come up
	I0626 20:00:29.372551   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:29.373036   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:29.373066   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:29.372982   27507 retry.go:31] will retry after 653.940169ms: waiting for machine to come up
	I0626 20:00:30.028862   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:30.029363   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:30.029418   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:30.029315   27507 retry.go:31] will retry after 870.685515ms: waiting for machine to come up
	I0626 20:00:30.902019   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:30.902498   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:30.902520   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:30.902462   27507 retry.go:31] will retry after 798.836571ms: waiting for machine to come up
	I0626 20:00:31.702622   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:31.703105   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:31.703137   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:31.703048   27507 retry.go:31] will retry after 1.055048257s: waiting for machine to come up
	I0626 20:00:32.759634   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:32.760070   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:32.760121   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:32.760037   27507 retry.go:31] will retry after 1.174127248s: waiting for machine to come up
	I0626 20:00:33.936475   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:33.937019   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:33.937048   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:33.936960   27507 retry.go:31] will retry after 1.836923498s: waiting for machine to come up
	I0626 20:00:35.775694   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:35.776120   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:35.776142   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:35.776080   27507 retry.go:31] will retry after 1.939449974s: waiting for machine to come up
	I0626 20:00:37.717753   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:37.718291   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:37.718328   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:37.718229   27507 retry.go:31] will retry after 2.320761145s: waiting for machine to come up
	I0626 20:00:40.040868   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:40.041409   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:40.041438   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:40.041352   27507 retry.go:31] will retry after 4.212215023s: waiting for machine to come up
	I0626 20:00:44.255967   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:44.256392   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find current IP address of domain multinode-050558-m02 in network mk-multinode-050558
	I0626 20:00:44.256412   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | I0626 20:00:44.256363   27507 retry.go:31] will retry after 4.732573107s: waiting for machine to come up
	I0626 20:00:48.991236   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:48.991721   27145 main.go:141] libmachine: (multinode-050558-m02) Found IP for machine: 192.168.39.133
	I0626 20:00:48.991753   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has current primary IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
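
The "Waiting to get IP" loop polls the network's DHCP leases for the VM's MAC address, sleeping a little longer after each miss (273ms, 337ms, ... up to 4.7s above). A sketch of that pattern, assuming a simple capped exponential backoff with jitter; minikube's retry.go implements its own policy:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a stand-in for querying the DHCP leases for the VM's MAC.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.39.133", nil
    }

    func main() {
        delay := 250 * time.Millisecond
        for attempt := 0; attempt < 15; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("Found IP for machine:", ip)
                return
            }
            // Jittered, capped exponential backoff, similar in spirit to the
            // growing "will retry after ..." delays in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay *= 2; delay > 5*time.Second {
                delay = 5 * time.Second
            }
        }
        fmt.Println("timed out waiting for IP")
    }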
	I0626 20:00:48.991766   27145 main.go:141] libmachine: (multinode-050558-m02) Reserving static IP address...
	I0626 20:00:48.992139   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | unable to find host DHCP lease matching {name: "multinode-050558-m02", mac: "52:54:00:86:03:c9", ip: "192.168.39.133"} in network mk-multinode-050558
	I0626 20:00:49.065609   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Getting to WaitForSSH function...
	I0626 20:00:49.065644   27145 main.go:141] libmachine: (multinode-050558-m02) Reserved static IP address: 192.168.39.133
	I0626 20:00:49.065659   27145 main.go:141] libmachine: (multinode-050558-m02) Waiting for SSH to be available...
	I0626 20:00:49.068240   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.068694   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:minikube Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:49.068728   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.068838   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Using SSH client type: external
	I0626 20:00:49.068864   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa (-rw-------)
	I0626 20:00:49.068896   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:00:49.068912   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | About to run SSH command:
	I0626 20:00:49.068926   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | exit 0
	I0626 20:00:49.169150   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | SSH cmd err, output: <nil>: 
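
WaitForSSH shells out to the system ssh client with the exact flags logged above and runs `exit 0` until the command succeeds. A trimmed version of that probe:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Flags mirror the ones in the log: no known_hosts pollution, key-only
        // auth with the machine's generated private key, short timeouts.
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa",
            "-p", "22",
            "docker@192.168.39.133",
            "exit 0",
        }
        err := exec.Command("ssh", args...).Run()
        fmt.Println("ssh reachable:", err == nil) // nil error means the "exit 0" probe passed
    }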
	I0626 20:00:49.169408   27145 main.go:141] libmachine: (multinode-050558-m02) KVM machine creation complete!
	I0626 20:00:49.169782   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetConfigRaw
	I0626 20:00:49.170285   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:00:49.170485   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:00:49.170670   27145 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0626 20:00:49.170691   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetState
	I0626 20:00:49.171964   27145 main.go:141] libmachine: Detecting operating system of created instance...
	I0626 20:00:49.171982   27145 main.go:141] libmachine: Waiting for SSH to be available...
	I0626 20:00:49.171991   27145 main.go:141] libmachine: Getting to WaitForSSH function...
	I0626 20:00:49.172006   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:49.175088   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.175660   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:49.175691   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.175875   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:49.176093   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:49.176286   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:49.176407   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:49.176613   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 20:00:49.177107   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:00:49.177123   27145 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0626 20:00:49.308634   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:00:49.308657   27145 main.go:141] libmachine: Detecting the provisioner...
	I0626 20:00:49.308669   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:49.311388   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.311797   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:49.311827   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.311965   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:49.312136   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:49.312254   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:49.312390   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:49.312549   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 20:00:49.312922   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:00:49.312933   27145 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0626 20:00:49.445996   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2e95ab-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0626 20:00:49.446084   27145 main.go:141] libmachine: found compatible host: buildroot
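
Provisioner detection is `cat /etc/os-release` plus a scan of the KEY=VALUE pairs: ID=buildroot selects the buildroot provisioner. A sketch of the parse, using the body returned above:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // The body returned by `cat /etc/os-release` in the log above.
        raw := "NAME=Buildroot\nVERSION=2021.02.12-1-ge2e95ab-dirty\nID=buildroot\nVERSION_ID=2021.02.12\nPRETTY_NAME=\"Buildroot 2021.02.12\"\n"
        kv := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(raw))
        for sc.Scan() {
            k, v, ok := strings.Cut(sc.Text(), "=")
            if !ok {
                continue
            }
            kv[k] = strings.Trim(v, `"`) // values may be quoted
        }
        if kv["ID"] == "buildroot" {
            fmt.Println("found compatible host: buildroot")
        }
    }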
	I0626 20:00:49.446092   27145 main.go:141] libmachine: Provisioning with buildroot...
	I0626 20:00:49.446108   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetMachineName
	I0626 20:00:49.446389   27145 buildroot.go:166] provisioning hostname "multinode-050558-m02"
	I0626 20:00:49.446419   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetMachineName
	I0626 20:00:49.446614   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:49.449689   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.450071   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:49.450101   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.450297   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:49.450519   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:49.450679   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:49.450815   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:49.450959   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 20:00:49.451395   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:00:49.451411   27145 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-050558-m02 && echo "multinode-050558-m02" | sudo tee /etc/hostname
	I0626 20:00:49.598323   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-050558-m02
	
	I0626 20:00:49.598348   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:49.600928   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.601276   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:49.601306   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.601562   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:49.601736   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:49.601888   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:49.602073   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:49.602253   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 20:00:49.602818   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:00:49.602845   27145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-050558-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-050558-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-050558-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:00:49.742341   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:00:49.742368   27145 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:00:49.742389   27145 buildroot.go:174] setting up certificates
	I0626 20:00:49.742399   27145 provision.go:83] configureAuth start
	I0626 20:00:49.742411   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetMachineName
	I0626 20:00:49.742656   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetIP
	I0626 20:00:49.745015   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.745361   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:49.745417   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.745545   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:49.747553   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.747904   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:49.747937   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.748056   27145 provision.go:138] copyHostCerts
	I0626 20:00:49.748093   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:00:49.748121   27145 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:00:49.748139   27145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:00:49.748211   27145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:00:49.748301   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:00:49.748325   27145 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:00:49.748334   27145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:00:49.748369   27145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:00:49.748424   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:00:49.748446   27145 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:00:49.748455   27145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:00:49.748487   27145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:00:49.748545   27145 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.multinode-050558-m02 san=[192.168.39.133 192.168.39.133 localhost 127.0.0.1 minikube multinode-050558-m02]
	I0626 20:00:49.965215   27145 provision.go:172] copyRemoteCerts
	I0626 20:00:49.965268   27145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:00:49.965288   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:49.967856   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.968168   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:49.968192   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:49.968343   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:49.968544   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:49.968708   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:49.968835   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa Username:docker}
	I0626 20:00:50.066370   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0626 20:00:50.066438   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:00:50.090521   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0626 20:00:50.090602   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0626 20:00:50.115058   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0626 20:00:50.115140   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 20:00:50.138775   27145 provision.go:86] duration metric: configureAuth took 396.354799ms
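configureAuth above generates a per-machine server certificate (SANs per the log: the guest IP, localhost, minikube, and the machine name) and copies the CA plus the server key pair to the docker-machine paths inside the guest. A minimal verification sketch, assuming the SSH key path and user shown in the "new ssh client" lines; the ssh invocation itself is illustrative, not part of the test:
	# Illustrative check of the cert layout copyRemoteCerts installed.
	ssh -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa \
	  docker@192.168.39.133 \
	  'sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'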
	I0626 20:00:50.138799   27145 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:00:50.138983   27145 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:00:50.139066   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:50.141887   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.142249   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:50.142298   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.142392   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:50.142605   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:50.142779   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:50.142950   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:50.143123   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 20:00:50.143687   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:00:50.143718   27145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:00:50.465316   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
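The %!s(MISSING) in the command above is Go's fmt placeholder error in minikube's own logging; the intended printf argument can be read back from the echoed command output. A reconstruction of the step, under that assumption:
	# Reconstructed from the log: write the service-CIDR insecure-registry
	# flag for CRI-O, then restart the runtime to pick it up.
	sudo mkdir -p /etc/sysconfig
	printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio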
	I0626 20:00:50.465342   27145 main.go:141] libmachine: Checking connection to Docker...
	I0626 20:00:50.465357   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetURL
	I0626 20:00:50.466670   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | Using libvirt version 6000000
	I0626 20:00:50.469248   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.469585   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:50.469610   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.469815   27145 main.go:141] libmachine: Docker is up and running!
	I0626 20:00:50.469832   27145 main.go:141] libmachine: Reticulating splines...
	I0626 20:00:50.469839   27145 client.go:171] LocalClient.Create took 24.084363083s
	I0626 20:00:50.469866   27145 start.go:167] duration metric: libmachine.API.Create for "multinode-050558" took 24.084428485s
	I0626 20:00:50.469877   27145 start.go:300] post-start starting for "multinode-050558-m02" (driver="kvm2")
	I0626 20:00:50.469890   27145 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:00:50.469913   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:00:50.470145   27145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:00:50.470169   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:50.472469   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.472836   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:50.472863   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.473030   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:50.473223   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:50.473368   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:50.473534   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa Username:docker}
	I0626 20:00:50.572887   27145 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:00:50.577310   27145 command_runner.go:130] > NAME=Buildroot
	I0626 20:00:50.577324   27145 command_runner.go:130] > VERSION=2021.02.12-1-ge2e95ab-dirty
	I0626 20:00:50.577328   27145 command_runner.go:130] > ID=buildroot
	I0626 20:00:50.577334   27145 command_runner.go:130] > VERSION_ID=2021.02.12
	I0626 20:00:50.577338   27145 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0626 20:00:50.577361   27145 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:00:50.577379   27145 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:00:50.577436   27145 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:00:50.577510   27145 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:00:50.577526   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /etc/ssl/certs/144432.pem
	I0626 20:00:50.577598   27145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:00:50.588797   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:00:50.612542   27145 start.go:303] post-start completed in 142.651162ms
	I0626 20:00:50.612583   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetConfigRaw
	I0626 20:00:50.613106   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetIP
	I0626 20:00:50.615554   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.615933   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:50.615964   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.616255   27145 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 20:00:50.616488   27145 start.go:128] duration metric: createHost completed in 24.249542949s
	I0626 20:00:50.616517   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:50.618865   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.619223   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:50.619254   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.619372   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:50.619550   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:50.619693   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:50.619826   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:50.619986   27145 main.go:141] libmachine: Using SSH client type: native
	I0626 20:00:50.620379   27145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:00:50.620391   27145 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 20:00:50.754195   27145 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687809650.723617497
	
	I0626 20:00:50.754221   27145 fix.go:206] guest clock: 1687809650.723617497
	I0626 20:00:50.754229   27145 fix.go:219] Guest: 2023-06-26 20:00:50.723617497 +0000 UTC Remote: 2023-06-26 20:00:50.616502451 +0000 UTC m=+88.077068697 (delta=107.115046ms)
	I0626 20:00:50.754243   27145 fix.go:190] guest clock delta is within tolerance: 107.115046ms
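The clock check above runs what appears to be date +%s.%N on the guest (rendered through fmt as %!s(MISSING).%!N(MISSING)) and compares the guest epoch timestamp against the host-side timestamp taken when createHost finished. A sketch of the same comparison, using the values recorded in the log:
	# guest clock as reported over SSH, host clock from the "Remote:" field
	guest=1687809650.723617497
	host=1687809650.616502451
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta=%.6fs\n", g - h }'
	# prints delta=0.107115s, matching the logged 107.115046ms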
	I0626 20:00:50.754247   27145 start.go:83] releasing machines lock for "multinode-050558-m02", held for 24.387393961s
	I0626 20:00:50.754265   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:00:50.754524   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetIP
	I0626 20:00:50.757277   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.757678   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:50.757711   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.760336   27145 out.go:177] * Found network options:
	I0626 20:00:50.762073   27145 out.go:177]   - NO_PROXY=192.168.39.229
	W0626 20:00:50.763639   27145 proxy.go:119] fail to check proxy env: Error ip not in block
	I0626 20:00:50.763673   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:00:50.764248   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:00:50.764491   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:00:50.764606   27145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:00:50.764643   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	W0626 20:00:50.764710   27145 proxy.go:119] fail to check proxy env: Error ip not in block
	I0626 20:00:50.764795   27145 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:00:50.764817   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:00:50.767485   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.767762   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.767896   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:50.767920   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.768066   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:50.768155   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:50.768182   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:50.768322   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:50.768391   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:00:50.768483   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:50.768564   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:00:50.768625   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa Username:docker}
	I0626 20:00:50.768699   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:00:50.768814   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa Username:docker}
	I0626 20:00:50.883660   27145 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0626 20:00:51.020165   27145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 20:00:51.027555   27145 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0626 20:00:51.027740   27145 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:00:51.027826   27145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:00:51.044176   27145 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0626 20:00:51.044476   27145 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
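Conflicting bridge/podman CNI configs are disabled by renaming rather than deleting, so CRI-O's config loader stops matching them while the originals stay recoverable. The %!p(MISSING) in the find command is presumably -printf "%p, " mangled by fmt; a reconstruction under that assumption:
	# Move default bridge/podman CNI configs aside with a .mk_disabled suffix.
	# Relies on GNU find substituting {} inside the -exec argument string.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;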
	I0626 20:00:51.044499   27145 start.go:466] detecting cgroup driver to use...
	I0626 20:00:51.044571   27145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:00:51.061915   27145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:00:51.075075   27145 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:00:51.075146   27145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:00:51.087437   27145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:00:51.099821   27145 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:00:51.211170   27145 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0626 20:00:51.211321   27145 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:00:51.225705   27145 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0626 20:00:51.326426   27145 docker.go:212] disabling docker service ...
	I0626 20:00:51.326505   27145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:00:51.340211   27145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:00:51.352581   27145 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0626 20:00:51.352832   27145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:00:51.463910   27145 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0626 20:00:51.463986   27145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:00:51.477293   27145 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0626 20:00:51.477817   27145 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0626 20:00:51.568750   27145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:00:51.581179   27145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:00:51.598358   27145 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
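Writing /etc/crictl.yaml is what lets the later bare crictl invocations in this log reach CRI-O without an explicit endpoint flag: crictl reads its default runtime endpoint from that file. Reconstructed, with the printf argument again inferred from the echoed file contents:
	sudo mkdir -p /etc
	printf "%s\n" "runtime-endpoint: unix:///var/run/crio/crio.sock" \
	  | sudo tee /etc/crictl.yaml
	# after this, sudo crictl version needs no --runtime-endpoint flag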
	I0626 20:00:51.598913   27145 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:00:51.598972   27145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:00:51.608282   27145 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:00:51.608342   27145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:00:51.617524   27145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:00:51.626913   27145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:00:51.636509   27145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:00:51.646054   27145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:00:51.654305   27145 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:00:51.654524   27145 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:00:51.654582   27145 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:00:51.667676   27145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:00:51.676188   27145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:00:51.780864   27145 ssh_runner.go:195] Run: sudo systemctl restart crio
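Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.9 and switch CRI-O to the cgroupfs driver with conmon in the pod cgroup, all in the same drop-in file, before the restart. Consolidated from the Run: lines for readability:
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio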
	I0626 20:00:51.959184   27145 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:00:51.959259   27145 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:00:51.964412   27145 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0626 20:00:51.964435   27145 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0626 20:00:51.964445   27145 command_runner.go:130] > Device: 16h/22d	Inode: 744         Links: 1
	I0626 20:00:51.964454   27145 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 20:00:51.964461   27145 command_runner.go:130] > Access: 2023-06-26 20:00:51.913378292 +0000
	I0626 20:00:51.964469   27145 command_runner.go:130] > Modify: 2023-06-26 20:00:51.913378292 +0000
	I0626 20:00:51.964477   27145 command_runner.go:130] > Change: 2023-06-26 20:00:51.913378292 +0000
	I0626 20:00:51.964482   27145 command_runner.go:130] >  Birth: -
	I0626 20:00:51.964977   27145 start.go:534] Will wait 60s for crictl version
	I0626 20:00:51.965037   27145 ssh_runner.go:195] Run: which crictl
	I0626 20:00:51.968937   27145 command_runner.go:130] > /usr/bin/crictl
	I0626 20:00:51.969095   27145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:00:51.997804   27145 command_runner.go:130] > Version:  0.1.0
	I0626 20:00:51.997830   27145 command_runner.go:130] > RuntimeName:  cri-o
	I0626 20:00:51.997844   27145 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0626 20:00:51.997853   27145 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0626 20:00:51.997873   27145 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:00:51.997929   27145 ssh_runner.go:195] Run: crio --version
	I0626 20:00:52.058423   27145 command_runner.go:130] > crio version 1.24.1
	I0626 20:00:52.058447   27145 command_runner.go:130] > Version:          1.24.1
	I0626 20:00:52.058458   27145 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 20:00:52.058465   27145 command_runner.go:130] > GitTreeState:     dirty
	I0626 20:00:52.058475   27145 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 20:00:52.058483   27145 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 20:00:52.058488   27145 command_runner.go:130] > Compiler:         gc
	I0626 20:00:52.058492   27145 command_runner.go:130] > Platform:         linux/amd64
	I0626 20:00:52.058511   27145 command_runner.go:130] > Linkmode:         dynamic
	I0626 20:00:52.058521   27145 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 20:00:52.058525   27145 command_runner.go:130] > SeccompEnabled:   true
	I0626 20:00:52.058529   27145 command_runner.go:130] > AppArmorEnabled:  false
	I0626 20:00:52.058597   27145 ssh_runner.go:195] Run: crio --version
	I0626 20:00:52.107590   27145 command_runner.go:130] > crio version 1.24.1
	I0626 20:00:52.107611   27145 command_runner.go:130] > Version:          1.24.1
	I0626 20:00:52.107622   27145 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 20:00:52.107629   27145 command_runner.go:130] > GitTreeState:     dirty
	I0626 20:00:52.107637   27145 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 20:00:52.107645   27145 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 20:00:52.107651   27145 command_runner.go:130] > Compiler:         gc
	I0626 20:00:52.107658   27145 command_runner.go:130] > Platform:         linux/amd64
	I0626 20:00:52.107667   27145 command_runner.go:130] > Linkmode:         dynamic
	I0626 20:00:52.107679   27145 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 20:00:52.107686   27145 command_runner.go:130] > SeccompEnabled:   true
	I0626 20:00:52.107700   27145 command_runner.go:130] > AppArmorEnabled:  false
	I0626 20:00:52.109980   27145 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:00:52.111785   27145 out.go:177]   - env NO_PROXY=192.168.39.229
	I0626 20:00:52.113413   27145 main.go:141] libmachine: (multinode-050558-m02) Calling .GetIP
	I0626 20:00:52.115800   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:52.116079   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:00:52.116113   27145 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:00:52.116287   27145 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 20:00:52.120526   27145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:00:52.133417   27145 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558 for IP: 192.168.39.133
	I0626 20:00:52.133446   27145 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:00:52.133607   27145 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:00:52.133662   27145 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:00:52.133680   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0626 20:00:52.133701   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0626 20:00:52.133718   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0626 20:00:52.133735   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0626 20:00:52.133797   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:00:52.133836   27145 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:00:52.133853   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:00:52.133887   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:00:52.133916   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:00:52.133949   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:00:52.134004   27145 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:00:52.134043   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /usr/share/ca-certificates/144432.pem
	I0626 20:00:52.134061   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:00:52.134075   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem -> /usr/share/ca-certificates/14443.pem
	I0626 20:00:52.134536   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:00:52.159222   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:00:52.183906   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:00:52.207627   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:00:52.230445   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:00:52.254389   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:00:52.277574   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:00:52.300467   27145 ssh_runner.go:195] Run: openssl version
	I0626 20:00:52.305955   27145 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0626 20:00:52.306039   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:00:52.315640   27145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:00:52.320175   27145 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:00:52.320277   27145 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:00:52.320317   27145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:00:52.325494   27145 command_runner.go:130] > b5213941
	I0626 20:00:52.325719   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:00:52.335718   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:00:52.347006   27145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:00:52.351708   27145 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:00:52.351787   27145 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:00:52.351855   27145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:00:52.357343   27145 command_runner.go:130] > 51391683
	I0626 20:00:52.357622   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:00:52.367367   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:00:52.377011   27145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:00:52.381506   27145 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:00:52.381572   27145 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:00:52.381618   27145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:00:52.387016   27145 command_runner.go:130] > 3ec20f2e
	I0626 20:00:52.387085   27145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
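Each CA above is linked under its OpenSSL subject hash (b5213941, 51391683, 3ec20f2e in the log) so that OpenSSL's directory lookup in /etc/ssl/certs can resolve it; the <hash>.0 naming is the standard c_rehash convention. The per-certificate pattern the log repeats:
	# Install-and-link pattern for one CA (values match the minikubeCA case).
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 in the log
	sudo ln -fs "$cert" "/etc/ssl/certs/$hash.0"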
	I0626 20:00:52.396541   27145 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:00:52.400453   27145 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 20:00:52.400568   27145 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 20:00:52.400656   27145 ssh_runner.go:195] Run: crio config
	I0626 20:00:52.452163   27145 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0626 20:00:52.452190   27145 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0626 20:00:52.452201   27145 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0626 20:00:52.452205   27145 command_runner.go:130] > #
	I0626 20:00:52.452228   27145 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0626 20:00:52.452239   27145 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0626 20:00:52.452247   27145 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0626 20:00:52.452254   27145 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0626 20:00:52.452257   27145 command_runner.go:130] > # reload'.
	I0626 20:00:52.452263   27145 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0626 20:00:52.452270   27145 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0626 20:00:52.452279   27145 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0626 20:00:52.452287   27145 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0626 20:00:52.452291   27145 command_runner.go:130] > [crio]
	I0626 20:00:52.452297   27145 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0626 20:00:52.452302   27145 command_runner.go:130] > # containers images, in this directory.
	I0626 20:00:52.452321   27145 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0626 20:00:52.452332   27145 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0626 20:00:52.452337   27145 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0626 20:00:52.452347   27145 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0626 20:00:52.452357   27145 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0626 20:00:52.452398   27145 command_runner.go:130] > storage_driver = "overlay"
	I0626 20:00:52.452414   27145 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0626 20:00:52.452423   27145 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0626 20:00:52.452434   27145 command_runner.go:130] > storage_option = [
	I0626 20:00:52.452444   27145 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0626 20:00:52.452653   27145 command_runner.go:130] > ]
	I0626 20:00:52.452672   27145 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0626 20:00:52.452681   27145 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0626 20:00:52.453128   27145 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0626 20:00:52.453164   27145 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0626 20:00:52.453175   27145 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0626 20:00:52.453180   27145 command_runner.go:130] > # always happen on a node reboot
	I0626 20:00:52.453410   27145 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0626 20:00:52.453429   27145 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0626 20:00:52.453439   27145 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0626 20:00:52.453467   27145 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0626 20:00:52.454040   27145 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0626 20:00:52.454061   27145 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0626 20:00:52.454075   27145 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0626 20:00:52.454717   27145 command_runner.go:130] > # internal_wipe = true
	I0626 20:00:52.454736   27145 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0626 20:00:52.454746   27145 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0626 20:00:52.454755   27145 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0626 20:00:52.455016   27145 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0626 20:00:52.455032   27145 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0626 20:00:52.455039   27145 command_runner.go:130] > [crio.api]
	I0626 20:00:52.455048   27145 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0626 20:00:52.455402   27145 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0626 20:00:52.455424   27145 command_runner.go:130] > # IP address on which the stream server will listen.
	I0626 20:00:52.456060   27145 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0626 20:00:52.456081   27145 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0626 20:00:52.456090   27145 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0626 20:00:52.456392   27145 command_runner.go:130] > # stream_port = "0"
	I0626 20:00:52.456414   27145 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0626 20:00:52.456838   27145 command_runner.go:130] > # stream_enable_tls = false
	I0626 20:00:52.456854   27145 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0626 20:00:52.457113   27145 command_runner.go:130] > # stream_idle_timeout = ""
	I0626 20:00:52.457133   27145 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0626 20:00:52.457143   27145 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0626 20:00:52.457149   27145 command_runner.go:130] > # minutes.
	I0626 20:00:52.457483   27145 command_runner.go:130] > # stream_tls_cert = ""
	I0626 20:00:52.457495   27145 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0626 20:00:52.457501   27145 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0626 20:00:52.457984   27145 command_runner.go:130] > # stream_tls_key = ""
	I0626 20:00:52.458000   27145 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0626 20:00:52.458009   27145 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0626 20:00:52.458018   27145 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0626 20:00:52.458351   27145 command_runner.go:130] > # stream_tls_ca = ""
	I0626 20:00:52.458366   27145 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 20:00:52.460013   27145 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0626 20:00:52.460027   27145 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 20:00:52.460034   27145 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0626 20:00:52.460062   27145 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0626 20:00:52.460081   27145 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0626 20:00:52.460085   27145 command_runner.go:130] > [crio.runtime]
	I0626 20:00:52.460094   27145 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0626 20:00:52.460101   27145 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0626 20:00:52.460105   27145 command_runner.go:130] > # "nofile=1024:2048"
	I0626 20:00:52.460113   27145 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0626 20:00:52.460120   27145 command_runner.go:130] > # default_ulimits = [
	I0626 20:00:52.460123   27145 command_runner.go:130] > # ]
	I0626 20:00:52.460132   27145 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0626 20:00:52.460138   27145 command_runner.go:130] > # no_pivot = false
	I0626 20:00:52.460144   27145 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0626 20:00:52.460152   27145 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0626 20:00:52.460157   27145 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0626 20:00:52.460162   27145 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0626 20:00:52.460168   27145 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0626 20:00:52.460174   27145 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 20:00:52.460180   27145 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0626 20:00:52.460185   27145 command_runner.go:130] > # Cgroup setting for conmon
	I0626 20:00:52.460193   27145 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0626 20:00:52.460200   27145 command_runner.go:130] > conmon_cgroup = "pod"
	I0626 20:00:52.460209   27145 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0626 20:00:52.460216   27145 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0626 20:00:52.460222   27145 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 20:00:52.460228   27145 command_runner.go:130] > conmon_env = [
	I0626 20:00:52.460233   27145 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0626 20:00:52.460238   27145 command_runner.go:130] > ]
	I0626 20:00:52.460243   27145 command_runner.go:130] > # Additional environment variables to set for all the
	I0626 20:00:52.460251   27145 command_runner.go:130] > # containers. These are overridden if set in the
	I0626 20:00:52.460256   27145 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0626 20:00:52.460260   27145 command_runner.go:130] > # default_env = [
	I0626 20:00:52.460266   27145 command_runner.go:130] > # ]
	I0626 20:00:52.460276   27145 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0626 20:00:52.460282   27145 command_runner.go:130] > # selinux = false
	I0626 20:00:52.460288   27145 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0626 20:00:52.460296   27145 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0626 20:00:52.460302   27145 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0626 20:00:52.460308   27145 command_runner.go:130] > # seccomp_profile = ""
	I0626 20:00:52.460316   27145 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0626 20:00:52.460327   27145 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0626 20:00:52.460352   27145 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0626 20:00:52.460363   27145 command_runner.go:130] > # which might increase security.
	I0626 20:00:52.460367   27145 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0626 20:00:52.460373   27145 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0626 20:00:52.460379   27145 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0626 20:00:52.460385   27145 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0626 20:00:52.460391   27145 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0626 20:00:52.460398   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:00:52.460403   27145 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0626 20:00:52.460409   27145 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0626 20:00:52.460413   27145 command_runner.go:130] > # the cgroup blockio controller.
	I0626 20:00:52.460419   27145 command_runner.go:130] > # blockio_config_file = ""
	I0626 20:00:52.460425   27145 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0626 20:00:52.460431   27145 command_runner.go:130] > # irqbalance daemon.
	I0626 20:00:52.460437   27145 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0626 20:00:52.460445   27145 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0626 20:00:52.460450   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:00:52.460458   27145 command_runner.go:130] > # rdt_config_file = ""
	I0626 20:00:52.460464   27145 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0626 20:00:52.460470   27145 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0626 20:00:52.460476   27145 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0626 20:00:52.460480   27145 command_runner.go:130] > # separate_pull_cgroup = ""
	I0626 20:00:52.460487   27145 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0626 20:00:52.460495   27145 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0626 20:00:52.460499   27145 command_runner.go:130] > # will be added.
	I0626 20:00:52.460503   27145 command_runner.go:130] > # default_capabilities = [
	I0626 20:00:52.460507   27145 command_runner.go:130] > # 	"CHOWN",
	I0626 20:00:52.460515   27145 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0626 20:00:52.460519   27145 command_runner.go:130] > # 	"FSETID",
	I0626 20:00:52.460523   27145 command_runner.go:130] > # 	"FOWNER",
	I0626 20:00:52.460528   27145 command_runner.go:130] > # 	"SETGID",
	I0626 20:00:52.460532   27145 command_runner.go:130] > # 	"SETUID",
	I0626 20:00:52.460538   27145 command_runner.go:130] > # 	"SETPCAP",
	I0626 20:00:52.460543   27145 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0626 20:00:52.460549   27145 command_runner.go:130] > # 	"KILL",
	I0626 20:00:52.460639   27145 command_runner.go:130] > # ]
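	The capability list above is commented out, so CRI-O's compiled-in defaults apply (echoed later in this log as CAP_CHOWN, CAP_DAC_OVERRIDE, and friends). A minimal sketch of tightening the set by uncommenting default_capabilities in /etc/crio/crio.conf; the subset kept here is illustrative, not taken from this run:
	
	  [crio.runtime]
	  # Workloads that still need SETUID/SETGID/SETPCAP must now request
	  # those capabilities explicitly in their pod securityContext.
	  default_capabilities = [
	  	"CHOWN",
	  	"DAC_OVERRIDE",
	  	"FOWNER",
	  	"NET_BIND_SERVICE",
	  	"KILL",
	  ]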
	I0626 20:00:52.460779   27145 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0626 20:00:52.460797   27145 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 20:00:52.460804   27145 command_runner.go:130] > # default_sysctls = [
	I0626 20:00:52.460810   27145 command_runner.go:130] > # ]
	I0626 20:00:52.460823   27145 command_runner.go:130] > # List of devices on the host that a
	I0626 20:00:52.460833   27145 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0626 20:00:52.460841   27145 command_runner.go:130] > # allowed_devices = [
	I0626 20:00:52.460851   27145 command_runner.go:130] > # 	"/dev/fuse",
	I0626 20:00:52.460860   27145 command_runner.go:130] > # ]
	I0626 20:00:52.460875   27145 command_runner.go:130] > # List of additional devices, specified as
	I0626 20:00:52.460890   27145 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0626 20:00:52.460906   27145 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0626 20:00:52.462426   27145 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 20:00:52.462444   27145 command_runner.go:130] > # additional_devices = [
	I0626 20:00:52.462448   27145 command_runner.go:130] > # ]
	I0626 20:00:52.462453   27145 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0626 20:00:52.462457   27145 command_runner.go:130] > # cdi_spec_dirs = [
	I0626 20:00:52.462461   27145 command_runner.go:130] > # 	"/etc/cdi",
	I0626 20:00:52.462473   27145 command_runner.go:130] > # 	"/var/run/cdi",
	I0626 20:00:52.462478   27145 command_runner.go:130] > # ]
	I0626 20:00:52.462489   27145 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0626 20:00:52.462503   27145 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0626 20:00:52.462507   27145 command_runner.go:130] > # Defaults to false.
	I0626 20:00:52.462512   27145 command_runner.go:130] > # device_ownership_from_security_context = false
	I0626 20:00:52.462518   27145 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0626 20:00:52.462527   27145 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0626 20:00:52.462531   27145 command_runner.go:130] > # hooks_dir = [
	I0626 20:00:52.462538   27145 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0626 20:00:52.462543   27145 command_runner.go:130] > # ]
	I0626 20:00:52.462556   27145 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0626 20:00:52.462570   27145 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0626 20:00:52.462578   27145 command_runner.go:130] > # its default mounts from the following two files:
	I0626 20:00:52.462582   27145 command_runner.go:130] > #
	I0626 20:00:52.462589   27145 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0626 20:00:52.462597   27145 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0626 20:00:52.462603   27145 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0626 20:00:52.462609   27145 command_runner.go:130] > #
	I0626 20:00:52.462615   27145 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0626 20:00:52.462624   27145 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0626 20:00:52.462634   27145 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0626 20:00:52.462639   27145 command_runner.go:130] > #      only add mounts it finds in this file.
	I0626 20:00:52.462642   27145 command_runner.go:130] > #
	I0626 20:00:52.462647   27145 command_runner.go:130] > # default_mounts_file = ""
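	The /SRC:/DST format described above is one mount per line; an illustrative default_mounts_file entry (the path is an example, not taken from this host):
	
	  /usr/share/zoneinfo:/usr/share/zoneinfo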
	I0626 20:00:52.462652   27145 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0626 20:00:52.462660   27145 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0626 20:00:52.462664   27145 command_runner.go:130] > pids_limit = 1024
	I0626 20:00:52.462672   27145 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0626 20:00:52.462681   27145 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0626 20:00:52.462687   27145 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0626 20:00:52.462697   27145 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0626 20:00:52.462704   27145 command_runner.go:130] > # log_size_max = -1
	I0626 20:00:52.462711   27145 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0626 20:00:52.462717   27145 command_runner.go:130] > # log_to_journald = false
	I0626 20:00:52.462727   27145 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0626 20:00:52.462734   27145 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0626 20:00:52.462740   27145 command_runner.go:130] > # Path to directory for container attach sockets.
	I0626 20:00:52.462747   27145 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0626 20:00:52.462752   27145 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0626 20:00:52.462758   27145 command_runner.go:130] > # bind_mount_prefix = ""
	I0626 20:00:52.462764   27145 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0626 20:00:52.462768   27145 command_runner.go:130] > # read_only = false
	I0626 20:00:52.462774   27145 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0626 20:00:52.462781   27145 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0626 20:00:52.462785   27145 command_runner.go:130] > # live configuration reload.
	I0626 20:00:52.462792   27145 command_runner.go:130] > # log_level = "info"
	I0626 20:00:52.462797   27145 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0626 20:00:52.462802   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:00:52.462806   27145 command_runner.go:130] > # log_filter = ""
	I0626 20:00:52.462812   27145 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0626 20:00:52.462820   27145 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0626 20:00:52.462823   27145 command_runner.go:130] > # separated by comma.
	I0626 20:00:52.462828   27145 command_runner.go:130] > # uid_mappings = ""
	I0626 20:00:52.462836   27145 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0626 20:00:52.462842   27145 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0626 20:00:52.462848   27145 command_runner.go:130] > # separated by comma.
	I0626 20:00:52.462852   27145 command_runner.go:130] > # gid_mappings = ""
	I0626 20:00:52.462858   27145 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0626 20:00:52.462866   27145 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 20:00:52.462872   27145 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 20:00:52.462879   27145 command_runner.go:130] > # minimum_mappable_uid = -1
	I0626 20:00:52.462884   27145 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0626 20:00:52.462897   27145 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 20:00:52.462905   27145 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 20:00:52.462910   27145 command_runner.go:130] > # minimum_mappable_gid = -1
	I0626 20:00:52.462916   27145 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0626 20:00:52.462924   27145 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0626 20:00:52.462929   27145 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0626 20:00:52.462936   27145 command_runner.go:130] > # ctr_stop_timeout = 30
	I0626 20:00:52.462942   27145 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0626 20:00:52.462950   27145 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0626 20:00:52.462955   27145 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0626 20:00:52.462959   27145 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0626 20:00:52.462966   27145 command_runner.go:130] > drop_infra_ctr = false
	I0626 20:00:52.462972   27145 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0626 20:00:52.462980   27145 command_runner.go:130] > # You can use the Linux CPU list format to specify desired CPUs.
	I0626 20:00:52.462999   27145 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0626 20:00:52.463006   27145 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0626 20:00:52.463012   27145 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0626 20:00:52.463019   27145 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0626 20:00:52.463023   27145 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0626 20:00:52.463032   27145 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0626 20:00:52.463039   27145 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0626 20:00:52.463044   27145 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0626 20:00:52.463053   27145 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0626 20:00:52.463059   27145 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0626 20:00:52.463063   27145 command_runner.go:130] > # default_runtime = "runc"
	I0626 20:00:52.463069   27145 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0626 20:00:52.463076   27145 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I0626 20:00:52.463087   27145 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0626 20:00:52.463095   27145 command_runner.go:130] > # creation as a file is not desired either.
	I0626 20:00:52.463102   27145 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0626 20:00:52.463109   27145 command_runner.go:130] > # the hostname is being managed dynamically.
	I0626 20:00:52.463114   27145 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0626 20:00:52.463120   27145 command_runner.go:130] > # ]
	I0626 20:00:52.463126   27145 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0626 20:00:52.463134   27145 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0626 20:00:52.463142   27145 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0626 20:00:52.463147   27145 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0626 20:00:52.463153   27145 command_runner.go:130] > #
	I0626 20:00:52.463157   27145 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0626 20:00:52.463162   27145 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0626 20:00:52.463167   27145 command_runner.go:130] > #  runtime_type = "oci"
	I0626 20:00:52.463173   27145 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0626 20:00:52.463178   27145 command_runner.go:130] > #  privileged_without_host_devices = false
	I0626 20:00:52.463184   27145 command_runner.go:130] > #  allowed_annotations = []
	I0626 20:00:52.463188   27145 command_runner.go:130] > # Where:
	I0626 20:00:52.463193   27145 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0626 20:00:52.463201   27145 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0626 20:00:52.463209   27145 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0626 20:00:52.463217   27145 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0626 20:00:52.463221   27145 command_runner.go:130] > #   in $PATH.
	I0626 20:00:52.463229   27145 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0626 20:00:52.463234   27145 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0626 20:00:52.463242   27145 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0626 20:00:52.463246   27145 command_runner.go:130] > #   state.
	I0626 20:00:52.463254   27145 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0626 20:00:52.463260   27145 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0626 20:00:52.463266   27145 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0626 20:00:52.463276   27145 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0626 20:00:52.463284   27145 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0626 20:00:52.463290   27145 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0626 20:00:52.463297   27145 command_runner.go:130] > #   The currently recognized values are:
	I0626 20:00:52.463303   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0626 20:00:52.463312   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0626 20:00:52.463317   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0626 20:00:52.463325   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0626 20:00:52.463332   27145 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0626 20:00:52.463340   27145 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0626 20:00:52.463346   27145 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0626 20:00:52.463355   27145 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0626 20:00:52.463360   27145 command_runner.go:130] > #   should be moved to the container's cgroup
	I0626 20:00:52.463364   27145 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0626 20:00:52.463370   27145 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0626 20:00:52.463374   27145 command_runner.go:130] > runtime_type = "oci"
	I0626 20:00:52.463378   27145 command_runner.go:130] > runtime_root = "/run/runc"
	I0626 20:00:52.463383   27145 command_runner.go:130] > runtime_config_path = ""
	I0626 20:00:52.463389   27145 command_runner.go:130] > monitor_path = ""
	I0626 20:00:52.463393   27145 command_runner.go:130] > monitor_cgroup = ""
	I0626 20:00:52.463397   27145 command_runner.go:130] > monitor_exec_cgroup = ""
	I0626 20:00:52.463403   27145 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0626 20:00:52.463409   27145 command_runner.go:130] > # running containers
	I0626 20:00:52.463413   27145 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0626 20:00:52.463421   27145 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0626 20:00:52.463469   27145 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0626 20:00:52.463478   27145 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0626 20:00:52.463483   27145 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0626 20:00:52.463487   27145 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0626 20:00:52.463491   27145 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0626 20:00:52.463495   27145 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0626 20:00:52.463501   27145 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0626 20:00:52.463506   27145 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
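	Following the handler format documented above, an additional runtime can be registered next to runc by uncommenting one of these stanzas; a sketch for crun, assuming it is installed at /usr/bin/crun (not verified on this host):
	
	  [crio.runtime.runtimes.crun]
	  runtime_path = "/usr/bin/crun"
	  runtime_type = "oci"
	  runtime_root = "/run/crun"
	
	Pods then select the handler through a Kubernetes RuntimeClass object whose handler field matches the table name, crun.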
	I0626 20:00:52.463515   27145 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0626 20:00:52.463522   27145 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0626 20:00:52.463529   27145 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0626 20:00:52.463538   27145 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0626 20:00:52.463547   27145 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0626 20:00:52.463553   27145 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0626 20:00:52.463562   27145 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0626 20:00:52.463571   27145 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0626 20:00:52.463579   27145 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0626 20:00:52.463586   27145 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0626 20:00:52.463592   27145 command_runner.go:130] > # Example:
	I0626 20:00:52.463597   27145 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0626 20:00:52.463601   27145 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0626 20:00:52.463608   27145 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0626 20:00:52.463613   27145 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0626 20:00:52.463619   27145 command_runner.go:130] > # cpuset = "0-1"
	I0626 20:00:52.463623   27145 command_runner.go:130] > # cpushares = 0
	I0626 20:00:52.463626   27145 command_runner.go:130] > # Where:
	I0626 20:00:52.463631   27145 command_runner.go:130] > # The workload name is workload-type.
	I0626 20:00:52.463639   27145 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0626 20:00:52.463647   27145 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0626 20:00:52.463652   27145 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0626 20:00:52.463662   27145 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0626 20:00:52.463669   27145 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0626 20:00:52.463673   27145 command_runner.go:130] > # 
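	Putting the workload description above together: a pod opts in with the activation annotation (key only) and can override a resource per container via the prefixed form. A sketch reusing the example's names; the pod and container names are hypothetical, and the image is the pause image configured later in this file:
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: pinned-pod
	    annotations:
	      io.crio/workload: ""
	      io.crio.workload-type.cpuset/app: "0-1"
	  spec:
	    containers:
	    - name: app
	      image: registry.k8s.io/pause:3.9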
	I0626 20:00:52.463680   27145 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0626 20:00:52.463686   27145 command_runner.go:130] > #
	I0626 20:00:52.463691   27145 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0626 20:00:52.463699   27145 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0626 20:00:52.463705   27145 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0626 20:00:52.463713   27145 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0626 20:00:52.463718   27145 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0626 20:00:52.463724   27145 command_runner.go:130] > [crio.image]
	I0626 20:00:52.463729   27145 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0626 20:00:52.463736   27145 command_runner.go:130] > # default_transport = "docker://"
	I0626 20:00:52.463742   27145 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0626 20:00:52.463750   27145 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0626 20:00:52.463754   27145 command_runner.go:130] > # global_auth_file = ""
	I0626 20:00:52.463760   27145 command_runner.go:130] > # The image used to instantiate infra containers.
	I0626 20:00:52.463765   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:00:52.463770   27145 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0626 20:00:52.463778   27145 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0626 20:00:52.463784   27145 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0626 20:00:52.463789   27145 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:00:52.463793   27145 command_runner.go:130] > # pause_image_auth_file = ""
	I0626 20:00:52.463800   27145 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0626 20:00:52.463806   27145 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0626 20:00:52.463815   27145 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0626 20:00:52.463820   27145 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0626 20:00:52.463827   27145 command_runner.go:130] > # pause_command = "/pause"
	I0626 20:00:52.463832   27145 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0626 20:00:52.463840   27145 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0626 20:00:52.463846   27145 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0626 20:00:52.463854   27145 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0626 20:00:52.463861   27145 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0626 20:00:52.463865   27145 command_runner.go:130] > # signature_policy = ""
	I0626 20:00:52.463873   27145 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0626 20:00:52.463879   27145 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0626 20:00:52.463882   27145 command_runner.go:130] > # changing them here.
	I0626 20:00:52.463886   27145 command_runner.go:130] > # insecure_registries = [
	I0626 20:00:52.463892   27145 command_runner.go:130] > # ]
	I0626 20:00:52.463898   27145 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0626 20:00:52.463905   27145 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0626 20:00:52.463909   27145 command_runner.go:130] > # image_volumes = "mkdir"
	I0626 20:00:52.463915   27145 command_runner.go:130] > # Temporary directory to use for storing big files
	I0626 20:00:52.463919   27145 command_runner.go:130] > # big_files_temporary_dir = ""
	I0626 20:00:52.463925   27145 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0626 20:00:52.463931   27145 command_runner.go:130] > # CNI plugins.
	I0626 20:00:52.463935   27145 command_runner.go:130] > [crio.network]
	I0626 20:00:52.463943   27145 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0626 20:00:52.463948   27145 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0626 20:00:52.463954   27145 command_runner.go:130] > # cni_default_network = ""
	I0626 20:00:52.463960   27145 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0626 20:00:52.463966   27145 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0626 20:00:52.463971   27145 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0626 20:00:52.463977   27145 command_runner.go:130] > # plugin_dirs = [
	I0626 20:00:52.463981   27145 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0626 20:00:52.463984   27145 command_runner.go:130] > # ]
	I0626 20:00:52.463989   27145 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0626 20:00:52.463995   27145 command_runner.go:130] > [crio.metrics]
	I0626 20:00:52.464000   27145 command_runner.go:130] > # Globally enable or disable metrics support.
	I0626 20:00:52.464004   27145 command_runner.go:130] > enable_metrics = true
	I0626 20:00:52.464010   27145 command_runner.go:130] > # Specify enabled metrics collectors.
	I0626 20:00:52.464015   27145 command_runner.go:130] > # By default, all metrics are enabled.
	I0626 20:00:52.464023   27145 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0626 20:00:52.464029   27145 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0626 20:00:52.464036   27145 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0626 20:00:52.464040   27145 command_runner.go:130] > # metrics_collectors = [
	I0626 20:00:52.464047   27145 command_runner.go:130] > # 	"operations",
	I0626 20:00:52.464054   27145 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0626 20:00:52.464059   27145 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0626 20:00:52.464065   27145 command_runner.go:130] > # 	"operations_errors",
	I0626 20:00:52.464069   27145 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0626 20:00:52.464075   27145 command_runner.go:130] > # 	"image_pulls_by_name",
	I0626 20:00:52.464080   27145 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0626 20:00:52.464084   27145 command_runner.go:130] > # 	"image_pulls_failures",
	I0626 20:00:52.464091   27145 command_runner.go:130] > # 	"image_pulls_successes",
	I0626 20:00:52.464094   27145 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0626 20:00:52.464098   27145 command_runner.go:130] > # 	"image_layer_reuse",
	I0626 20:00:52.464104   27145 command_runner.go:130] > # 	"containers_oom_total",
	I0626 20:00:52.464108   27145 command_runner.go:130] > # 	"containers_oom",
	I0626 20:00:52.464113   27145 command_runner.go:130] > # 	"processes_defunct",
	I0626 20:00:52.464116   27145 command_runner.go:130] > # 	"operations_total",
	I0626 20:00:52.464123   27145 command_runner.go:130] > # 	"operations_latency_seconds",
	I0626 20:00:52.464127   27145 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0626 20:00:52.464133   27145 command_runner.go:130] > # 	"operations_errors_total",
	I0626 20:00:52.464139   27145 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0626 20:00:52.464146   27145 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0626 20:00:52.464150   27145 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0626 20:00:52.464157   27145 command_runner.go:130] > # 	"image_pulls_success_total",
	I0626 20:00:52.464161   27145 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0626 20:00:52.464168   27145 command_runner.go:130] > # 	"containers_oom_count_total",
	I0626 20:00:52.464171   27145 command_runner.go:130] > # ]
	I0626 20:00:52.464178   27145 command_runner.go:130] > # The port on which the metrics server will listen.
	I0626 20:00:52.464181   27145 command_runner.go:130] > # metrics_port = 9090
	I0626 20:00:52.464188   27145 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0626 20:00:52.464192   27145 command_runner.go:130] > # metrics_socket = ""
	I0626 20:00:52.464198   27145 command_runner.go:130] > # The certificate for the secure metrics server.
	I0626 20:00:52.464206   27145 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0626 20:00:52.464212   27145 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0626 20:00:52.464218   27145 command_runner.go:130] > # certificate on any modification event.
	I0626 20:00:52.464222   27145 command_runner.go:130] > # metrics_cert = ""
	I0626 20:00:52.464228   27145 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0626 20:00:52.464233   27145 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0626 20:00:52.464242   27145 command_runner.go:130] > # metrics_key = ""
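	With enable_metrics = true as set above and the default metrics_port of 9090, the collectors listed here can be inspected straight from the node; a sketch assuming plain HTTP on localhost (no metrics_cert is configured in this run):
	
	  curl -s http://127.0.0.1:9090/metrics | grep crio_operations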
	I0626 20:00:52.464249   27145 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0626 20:00:52.464253   27145 command_runner.go:130] > [crio.tracing]
	I0626 20:00:52.464261   27145 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0626 20:00:52.464265   27145 command_runner.go:130] > # enable_tracing = false
	I0626 20:00:52.464283   27145 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0626 20:00:52.464290   27145 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0626 20:00:52.464295   27145 command_runner.go:130] > # Number of samples to collect per million spans.
	I0626 20:00:52.464300   27145 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0626 20:00:52.464305   27145 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0626 20:00:52.464311   27145 command_runner.go:130] > [crio.stats]
	I0626 20:00:52.464317   27145 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0626 20:00:52.464324   27145 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0626 20:00:52.464328   27145 command_runner.go:130] > # stats_collection_period = 0
	I0626 20:00:52.464355   27145 command_runner.go:130] ! time="2023-06-26 20:00:52.421077756Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0626 20:00:52.464367   27145 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0626 20:00:52.464427   27145 cni.go:84] Creating CNI manager for ""
	I0626 20:00:52.464435   27145 cni.go:137] 2 nodes found, recommending kindnet
	I0626 20:00:52.464445   27145 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:00:52.464461   27145 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.133 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-050558 NodeName:multinode-050558-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:00:52.464556   27145 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-050558-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
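	A rendered config like the one above can be sanity-checked before it is used; a sketch assuming the YAML is saved as kubeadm.yaml (the filename is illustrative), using the same PATH override this log uses for kubeadm invocations:
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm config validate --config kubeadm.yaml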
	
	I0626 20:00:52.464600   27145 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-050558-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 20:00:52.464646   27145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:00:52.473984   27145 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.27.3': No such file or directory
	I0626 20:00:52.474084   27145 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.27.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.27.3': No such file or directory
	
	Initiating transfer...
	I0626 20:00:52.474141   27145 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.27.3
	I0626 20:00:52.483070   27145 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl.sha256
	I0626 20:00:52.483090   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/linux/amd64/v1.27.3/kubectl -> /var/lib/minikube/binaries/v1.27.3/kubectl
	I0626 20:00:52.483151   27145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubectl
	I0626 20:00:52.483199   27145 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/linux/amd64/v1.27.3/kubeadm
	I0626 20:00:52.483257   27145 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/linux/amd64/v1.27.3/kubelet
	I0626 20:00:52.487469   27145 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubectl': No such file or directory
	I0626 20:00:52.487502   27145 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubectl': No such file or directory
	I0626 20:00:52.487525   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/linux/amd64/v1.27.3/kubectl --> /var/lib/minikube/binaries/v1.27.3/kubectl (49258496 bytes)
	I0626 20:00:55.203093   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/linux/amd64/v1.27.3/kubeadm -> /var/lib/minikube/binaries/v1.27.3/kubeadm
	I0626 20:00:55.203168   27145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubeadm
	I0626 20:00:55.208008   27145 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubeadm': No such file or directory
	I0626 20:00:55.208042   27145 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubeadm': No such file or directory
	I0626 20:00:55.208070   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/linux/amd64/v1.27.3/kubeadm --> /var/lib/minikube/binaries/v1.27.3/kubeadm (48160768 bytes)
	I0626 20:00:56.692335   27145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:00:56.706839   27145 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/linux/amd64/v1.27.3/kubelet -> /var/lib/minikube/binaries/v1.27.3/kubelet
	I0626 20:00:56.706945   27145 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubelet
	I0626 20:00:56.711784   27145 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubelet': No such file or directory
	I0626 20:00:56.711878   27145 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.27.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubelet': No such file or directory
	I0626 20:00:56.711918   27145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/linux/amd64/v1.27.3/kubelet --> /var/lib/minikube/binaries/v1.27.3/kubelet (106160128 bytes)
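	Each binary is fetched with a checksum= fragment, which makes minikube verify the published SHA-256 before the scp step above. The equivalent manual check, as a sketch using the same dl.k8s.io URLs (the .sha256 file contains only the digest, hence the echo pattern):
	
	  curl -LO https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet
	  curl -LO https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check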
	I0626 20:00:57.193133   27145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0626 20:00:57.202662   27145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0626 20:00:57.219256   27145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:00:57.235271   27145 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0626 20:00:57.238952   27145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:00:57.250544   27145 host.go:66] Checking if "multinode-050558" exists ...
	I0626 20:00:57.250786   27145 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:00:57.250926   27145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:00:57.250978   27145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:00:57.265272   27145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0626 20:00:57.265744   27145 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:00:57.266226   27145 main.go:141] libmachine: Using API Version  1
	I0626 20:00:57.266246   27145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:00:57.266576   27145 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:00:57.266778   27145 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:00:57.266935   27145 start.go:301] JoinCluster: &{Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:00:57.267024   27145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0626 20:00:57.267048   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:00:57.269907   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:00:57.270331   27145 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:00:57.270361   27145 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:00:57.270478   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:00:57.270673   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:00:57.270876   27145 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:00:57.271013   27145 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:00:57.438714   27145 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token jh7108.w4wkwozxs8gc10uh --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:00:57.441875   27145 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0626 20:00:57.441931   27145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jh7108.w4wkwozxs8gc10uh --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-050558-m02"
	I0626 20:00:57.491618   27145 command_runner.go:130] > [preflight] Running pre-flight checks
	I0626 20:00:57.633397   27145 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0626 20:00:57.633444   27145 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0626 20:00:57.675785   27145 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:00:57.675815   27145 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:00:57.675823   27145 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0626 20:00:57.792369   27145 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0626 20:00:59.836364   27145 command_runner.go:130] > This node has joined the cluster:
	I0626 20:00:59.836393   27145 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0626 20:00:59.836403   27145 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0626 20:00:59.836413   27145 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0626 20:00:59.838480   27145 command_runner.go:130] ! W0626 20:00:57.470926     823 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0626 20:00:59.838507   27145 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:00:59.838529   27145 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jh7108.w4wkwozxs8gc10uh --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-050558-m02": (2.396582238s)
	I0626 20:00:59.838554   27145 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0626 20:01:00.074334   27145 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0626 20:01:00.074370   27145 start.go:303] JoinCluster complete in 2.807434895s
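	The kubeadm output above already names the verification step: listing nodes against the control plane should now show the m02 worker. A sketch using the kubeconfig path loaded later in this log:
	
	  kubectl --kubeconfig /home/jenkins/minikube-integration/16761-7242/kubeconfig get nodes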
	I0626 20:01:00.074383   27145 cni.go:84] Creating CNI manager for ""
	I0626 20:01:00.074389   27145 cni.go:137] 2 nodes found, recommending kindnet
	I0626 20:01:00.074443   27145 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0626 20:01:00.080998   27145 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0626 20:01:00.081030   27145 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0626 20:01:00.081040   27145 command_runner.go:130] > Device: 11h/17d	Inode: 3543        Links: 1
	I0626 20:01:00.081049   27145 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 20:01:00.081059   27145 command_runner.go:130] > Access: 2023-06-26 19:59:35.511112889 +0000
	I0626 20:01:00.081066   27145 command_runner.go:130] > Modify: 2023-06-22 22:21:30.000000000 +0000
	I0626 20:01:00.081072   27145 command_runner.go:130] > Change: 2023-06-26 19:59:33.743112889 +0000
	I0626 20:01:00.081076   27145 command_runner.go:130] >  Birth: -
	I0626 20:01:00.081123   27145 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0626 20:01:00.081133   27145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0626 20:01:00.099707   27145 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0626 20:01:00.512480   27145 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0626 20:01:00.517048   27145 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0626 20:01:00.521465   27145 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0626 20:01:00.538149   27145 command_runner.go:130] > daemonset.apps/kindnet configured
	I0626 20:01:00.542400   27145 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:01:00.542614   27145 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:01:00.542932   27145 round_trippers.go:463] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 20:01:00.542944   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:00.542952   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:00.542959   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:00.545705   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:00.545731   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:00.545742   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:00.545750   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:00.545759   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:00.545769   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:00.545784   27145 round_trippers.go:580]     Content-Length: 291
	I0626 20:01:00.545793   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:00 GMT
	I0626 20:01:00.545799   27145 round_trippers.go:580]     Audit-Id: 0e5099d3-06ca-40f2-b68c-abdcb5a9f075
	I0626 20:01:00.545827   27145 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94c202ca-4f15-4fc0-a8d2-e6d62293ec32","resourceVersion":"408","creationTimestamp":"2023-06-26T20:00:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0626 20:01:00.545929   27145 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-050558" context rescaled to 1 replicas
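The rescale goes through the Deployment's autoscaling/v1 Scale subresource, the same endpoint as the GET above. A minimal client-go sketch, assuming a kubeconfig path and a target of one replica (both assumptions for illustration):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// GET .../deployments/coredns/scale, as in the request logged above.
    	scale, err := client.AppsV1().Deployments("kube-system").
    		GetScale(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if scale.Spec.Replicas != 1 {
    		scale.Spec.Replicas = 1 // one replica suffices for a small test cluster
    		if _, err := client.AppsV1().Deployments("kube-system").
    			UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("coredns scaled to", scale.Spec.Replicas)
    }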
	I0626 20:01:00.545960   27145 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0626 20:01:00.548237   27145 out.go:177] * Verifying Kubernetes components...
	I0626 20:01:00.549909   27145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:01:00.575683   27145 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:01:00.576001   27145 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:01:00.576312   27145 node_ready.go:35] waiting up to 6m0s for node "multinode-050558-m02" to be "Ready" ...
	I0626 20:01:00.576392   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:00.576403   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:00.576414   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:00.576427   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:00.579539   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:00.579572   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:00.579583   27145 round_trippers.go:580]     Audit-Id: 0f2cbb9b-fa98-4b13-a6f5-e9c7e7ba3b90
	I0626 20:01:00.579595   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:00.579607   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:00.579618   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:00.579630   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:00.579640   27145 round_trippers.go:580]     Content-Length: 3531
	I0626 20:01:00.579651   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:00 GMT
	I0626 20:01:00.579768   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"453","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0626 20:01:01.080771   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:01.080795   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:01.080808   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:01.080819   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:01.083791   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:01.083817   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:01.083828   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:01 GMT
	I0626 20:01:01.083837   27145 round_trippers.go:580]     Audit-Id: cd136724-3075-41de-9583-aaed012fba19
	I0626 20:01:01.083846   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:01.083855   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:01.083865   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:01.083876   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:01.083886   27145 round_trippers.go:580]     Content-Length: 3531
	I0626 20:01:01.083970   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"453","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I0626 20:01:01.580985   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:01.581008   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:01.581016   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:01.581022   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:01.584002   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:01.584027   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:01.584037   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:01.584046   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:01.584055   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:01.584063   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:01.584077   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:01 GMT
	I0626 20:01:01.584089   27145 round_trippers.go:580]     Audit-Id: 342fc694-9518-4fa4-bd59-9d96958a6dea
	I0626 20:01:01.584097   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:01.584184   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:02.080787   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:02.080810   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:02.080818   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:02.080824   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:02.085473   27145 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:01:02.085501   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:02.085511   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:02.085520   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:02.085529   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:02.085537   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:02.085546   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:02 GMT
	I0626 20:01:02.085558   27145 round_trippers.go:580]     Audit-Id: 37db132d-0d14-455c-97c7-d73e3210031a
	I0626 20:01:02.085570   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:02.085742   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:02.581061   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:02.581081   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:02.581091   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:02.581100   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:02.583868   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:02.583894   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:02.583912   27145 round_trippers.go:580]     Audit-Id: 02d925cf-8ff5-4015-87a9-498069296922
	I0626 20:01:02.583921   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:02.583929   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:02.583937   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:02.583950   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:02.583958   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:02.583969   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:02 GMT
	I0626 20:01:02.584021   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:02.584297   27145 node_ready.go:58] node "multinode-050558-m02" has status "Ready":"False"
	I0626 20:01:03.080774   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:03.080794   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:03.080803   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:03.080814   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:03.084300   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:03.084326   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:03.084335   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:03 GMT
	I0626 20:01:03.084348   27145 round_trippers.go:580]     Audit-Id: e87836d6-8313-490a-8229-1a9473f311db
	I0626 20:01:03.084357   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:03.084372   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:03.084397   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:03.084406   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:03.084415   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:03.084556   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:03.580993   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:03.581013   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:03.581021   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:03.581034   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:03.583483   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:03.583507   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:03.583517   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:03.583525   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:03.583533   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:03.583541   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:03.583551   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:03.583559   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:03 GMT
	I0626 20:01:03.583571   27145 round_trippers.go:580]     Audit-Id: 79c84bfe-3012-4ce5-8959-a5eaedf8018c
	I0626 20:01:03.583685   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:04.080278   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:04.080305   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:04.080316   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:04.080323   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:04.083178   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:04.083197   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:04.083205   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:04.083212   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:04.083221   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:04.083230   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:04.083239   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:04.083257   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:04 GMT
	I0626 20:01:04.083265   27145 round_trippers.go:580]     Audit-Id: 101a7c7f-aa09-48b5-8556-c870d9a1a732
	I0626 20:01:04.083350   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:04.580876   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:04.580897   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:04.580910   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:04.580916   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:04.583780   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:04.583805   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:04.583819   27145 round_trippers.go:580]     Audit-Id: 525b4fa8-75b0-4a91-980a-415e4f9e9087
	I0626 20:01:04.583828   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:04.583836   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:04.583845   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:04.583853   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:04.583863   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:04.583875   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:04 GMT
	I0626 20:01:04.583958   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:05.080488   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:05.080510   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:05.080518   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:05.080524   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:05.083384   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:05.083410   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:05.083420   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:05 GMT
	I0626 20:01:05.083429   27145 round_trippers.go:580]     Audit-Id: 6d47d1ae-438f-48ab-af8c-488b0dc6e354
	I0626 20:01:05.083437   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:05.083445   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:05.083453   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:05.083461   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:05.083469   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:05.083560   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:05.083850   27145 node_ready.go:58] node "multinode-050558-m02" has status "Ready":"False"
	I0626 20:01:05.581132   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:05.581151   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:05.581160   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:05.581166   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:05.584295   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:05.584315   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:05.584322   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:05.584329   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:05.584339   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:05 GMT
	I0626 20:01:05.584347   27145 round_trippers.go:580]     Audit-Id: 1238be13-d886-4553-bd9e-8eb3367bf796
	I0626 20:01:05.584359   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:05.584370   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:05.584377   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:05.584423   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:06.081031   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:06.081054   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:06.081062   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:06.081068   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:06.084025   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:06.084052   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:06.084061   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:06.084073   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:06 GMT
	I0626 20:01:06.084082   27145 round_trippers.go:580]     Audit-Id: fc90bc5d-3e40-422b-b61c-4dadaf638036
	I0626 20:01:06.084090   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:06.084099   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:06.084108   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:06.084117   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:06.084223   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:06.580214   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:06.580236   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:06.580245   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:06.580251   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:06.582768   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:06.582787   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:06.582794   27145 round_trippers.go:580]     Audit-Id: 64b411d1-858f-4920-a214-8b570b4ea80e
	I0626 20:01:06.582800   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:06.582805   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:06.582811   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:06.582816   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:06.582822   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:06.582827   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:06 GMT
	I0626 20:01:06.582895   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:07.080617   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:07.080645   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:07.080656   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:07.080665   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:07.083380   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:07.083405   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:07.083416   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:07 GMT
	I0626 20:01:07.083424   27145 round_trippers.go:580]     Audit-Id: 2b61b68e-05d9-46dd-bca4-5792dc7917e0
	I0626 20:01:07.083432   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:07.083440   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:07.083452   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:07.083460   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:07.083469   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:07.083554   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:07.580846   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:07.580870   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:07.580878   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:07.580884   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:07.584290   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:07.584315   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:07.584325   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:07.584333   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:07.584341   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:07.584348   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:07.584357   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:07.584365   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:07 GMT
	I0626 20:01:07.584374   27145 round_trippers.go:580]     Audit-Id: cd5bdd01-20df-41b4-ad4e-f9ae466b6dcf
	I0626 20:01:07.584469   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:07.584766   27145 node_ready.go:58] node "multinode-050558-m02" has status "Ready":"False"
	I0626 20:01:08.080545   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:08.080568   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:08.080576   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:08.080583   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:08.083699   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:08.083723   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:08.083730   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:08.083736   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:08.083741   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:08.083747   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:08.083752   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:08 GMT
	I0626 20:01:08.083757   27145 round_trippers.go:580]     Audit-Id: f7e04df7-faa3-4309-9a30-f1fa70c5ed2e
	I0626 20:01:08.083763   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:08.083831   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:08.580380   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:08.580403   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:08.580411   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:08.580417   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:08.584098   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:08.584119   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:08.584128   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:08.584136   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:08.584144   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:08.584153   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:08.584166   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:08 GMT
	I0626 20:01:08.584179   27145 round_trippers.go:580]     Audit-Id: 8ef05248-e15f-42f4-95d2-63901edaa938
	I0626 20:01:08.584189   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:08.584283   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:09.081008   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:09.081038   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.081050   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.081060   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.084820   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:09.084848   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.084858   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.084864   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.084869   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.084874   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.084880   27145 round_trippers.go:580]     Content-Length: 3640
	I0626 20:01:09.084885   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.084890   27145 round_trippers.go:580]     Audit-Id: 3e3c2417-b386-4b8e-82da-18bdf529ffa9
	I0626 20:01:09.085033   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"467","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0626 20:01:09.580685   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:09.580714   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.580724   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.580734   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.583706   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:09.583725   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.583733   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.583739   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.583744   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.583757   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.583768   27145 round_trippers.go:580]     Content-Length: 3726
	I0626 20:01:09.583779   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.583789   27145 round_trippers.go:580]     Audit-Id: 3c1d5347-79c9-46aa-a023-c5d0901b8f18
	I0626 20:01:09.583858   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"491","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I0626 20:01:09.584100   27145 node_ready.go:49] node "multinode-050558-m02" has status "Ready":"True"
	I0626 20:01:09.584115   27145 node_ready.go:38] duration metric: took 9.007784753s waiting for node "multinode-050558-m02" to be "Ready" ...
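The wait just completed is the standard readiness poll: fetch the Node object repeatedly and test its Ready condition until it reports True or the timeout expires. A minimal helper sketch with client-go; the 500ms interval mirrors the cadence of the requests above and the 6m0s timeout mirrors the stated wait, but both are assumptions for illustration:

    package waiters

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the named Node until its NodeReady condition is True.
    func waitNodeReady(client kubernetes.Interface, name string) error {
    	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat transient API errors as "not ready yet" and keep polling
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }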
	I0626 20:01:09.584123   27145 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:01:09.584188   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:01:09.584198   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.584205   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.584211   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.589742   27145 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0626 20:01:09.589762   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.589772   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.589781   27145 round_trippers.go:580]     Audit-Id: e6d3ef46-40bb-4e58-b56b-9436a48d0e09
	I0626 20:01:09.589788   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.589796   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.589803   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.589811   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.590589   27145 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"491"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"404","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67374 chars]
	I0626 20:01:09.592558   27145 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.592614   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:01:09.592620   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.592631   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.592640   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.597476   27145 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:01:09.597497   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.597508   27145 round_trippers.go:580]     Audit-Id: aa5a1505-6019-4daf-8893-6a0e87f63a16
	I0626 20:01:09.597516   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.597531   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.597543   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.597549   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.597558   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.597868   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"404","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0626 20:01:09.598381   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:01:09.598404   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.598414   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.598424   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.601919   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:09.601941   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.601951   27145 round_trippers.go:580]     Audit-Id: d38e4521-bf89-4583-accb-f969edc2bde9
	I0626 20:01:09.601959   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.601967   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.601976   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.601984   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.601996   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.602854   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:01:09.603199   27145 pod_ready.go:92] pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace has status "Ready":"True"
	I0626 20:01:09.603214   27145 pod_ready.go:81] duration metric: took 10.639291ms waiting for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.603222   27145 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.603266   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-050558
	I0626 20:01:09.603274   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.603280   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.603287   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.606089   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:09.606102   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.606109   27145 round_trippers.go:580]     Audit-Id: 7fbd100b-adfd-40b7-aac7-a4640b4a4b06
	I0626 20:01:09.606114   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.606119   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.606124   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.606130   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.606138   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.606862   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-050558","namespace":"kube-system","uid":"457d2420-8ece-4b92-8281-7866fa6a884a","resourceVersion":"411","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.mirror":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.seen":"2023-06-26T19:59:55.756268397Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0626 20:01:09.607218   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:01:09.607230   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.607237   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.607243   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.609088   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:01:09.609105   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.609111   27145 round_trippers.go:580]     Audit-Id: cc61a19e-ef5f-4fb8-9e7c-99af14ccf9e1
	I0626 20:01:09.609116   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.609123   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.609128   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.609133   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.609138   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.609387   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:01:09.609656   27145 pod_ready.go:92] pod "etcd-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:01:09.609667   27145 pod_ready.go:81] duration metric: took 6.440051ms waiting for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.609679   27145 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.609716   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:01:09.609723   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.609729   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.609735   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.611534   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:01:09.611550   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.611559   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.611565   27145 round_trippers.go:580]     Audit-Id: a990f8c7-3ad2-44e7-86cb-f19254a49caa
	I0626 20:01:09.611570   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.611578   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.611588   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.611597   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.611782   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"412","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0626 20:01:09.612110   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:01:09.612120   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.612126   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.612137   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.613872   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:01:09.613886   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.613892   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.613898   27145 round_trippers.go:580]     Audit-Id: 3e9640e9-8fe2-4971-bf6e-f805046d1124
	I0626 20:01:09.613903   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.613908   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.613913   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.613918   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.614201   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:01:09.614448   27145 pod_ready.go:92] pod "kube-apiserver-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:01:09.614457   27145 pod_ready.go:81] duration metric: took 4.773245ms waiting for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.614466   27145 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.614502   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-050558
	I0626 20:01:09.614509   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.614515   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.614521   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.616426   27145 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:01:09.616443   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.616454   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.616459   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.616465   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.616479   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.616485   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.616493   27145 round_trippers.go:580]     Audit-Id: 84cb12f9-bccc-480d-9f51-403207ca4585
	I0626 20:01:09.616834   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-050558","namespace":"kube-system","uid":"d90eb1a6-03bd-4bdf-b50d-9448cef0b578","resourceVersion":"409","creationTimestamp":"2023-06-26T20:00:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.mirror":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.seen":"2023-06-26T20:00:04.802665770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0626 20:01:09.617173   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:01:09.617184   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.617191   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.617197   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.619318   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:09.619333   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.619340   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.619345   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.619351   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.619356   27145 round_trippers.go:580]     Audit-Id: 0b833e60-c57e-465b-bfd7-d6befaf30030
	I0626 20:01:09.619361   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.619366   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.619493   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:01:09.619735   27145 pod_ready.go:92] pod "kube-controller-manager-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:01:09.619746   27145 pod_ready.go:81] duration metric: took 5.273962ms waiting for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.619753   27145 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.781123   27145 request.go:628] Waited for 161.314667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:01:09.781195   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:01:09.781200   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.781207   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.781214   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.784088   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:09.784119   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.784128   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.784135   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.784143   27145 round_trippers.go:580]     Audit-Id: 16fcf630-201a-42ab-b376-5c4613651de9
	I0626 20:01:09.784151   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.784160   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.784170   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.784434   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-67x99","generateName":"kube-proxy-","namespace":"kube-system","uid":"7ffa817a-1b4a-41a1-9a56-5c65849dc57e","resourceVersion":"377","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0626 20:01:09.981292   27145 request.go:628] Waited for 196.36187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:01:09.981346   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:01:09.981351   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:09.981364   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:09.981391   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:09.984636   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:09.984660   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:09.984671   27145 round_trippers.go:580]     Audit-Id: 47bfc9e9-4c74-48cd-9c79-ad69595b65e7
	I0626 20:01:09.984682   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:09.984694   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:09.984702   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:09.984714   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:09.984725   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:09 GMT
	I0626 20:01:09.984864   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:01:09.985186   27145 pod_ready.go:92] pod "kube-proxy-67x99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:01:09.985200   27145 pod_ready.go:81] duration metric: took 365.44209ms waiting for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:09.985215   27145 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:10.181678   27145 request.go:628] Waited for 196.38959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:01:10.181726   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:01:10.181731   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:10.181758   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:10.181768   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:10.184603   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:10.184621   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:10.184628   27145 round_trippers.go:580]     Audit-Id: 5355dbcc-b4c1-4242-b3c2-70cb52e09472
	I0626 20:01:10.184634   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:10.184640   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:10.184649   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:10.184657   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:10.184667   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:10 GMT
	I0626 20:01:10.184783   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wwg6x","generateName":"kube-proxy-","namespace":"kube-system","uid":"bdb04dda-dd36-45be-8f0e-7dad2bce1ef0","resourceVersion":"478","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0626 20:01:10.381614   27145 request.go:628] Waited for 196.402417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:10.381665   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:01:10.381672   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:10.381679   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:10.381703   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:10.384566   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:10.384584   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:10.384591   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:10.384597   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:10.384602   27145 round_trippers.go:580]     Content-Length: 3726
	I0626 20:01:10.384608   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:10 GMT
	I0626 20:01:10.384613   27145 round_trippers.go:580]     Audit-Id: e379393e-9571-4e22-8796-d36b310bbcfe
	I0626 20:01:10.384626   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:10.384634   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:10.384731   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"491","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I0626 20:01:10.384972   27145 pod_ready.go:92] pod "kube-proxy-wwg6x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:01:10.384987   27145 pod_ready.go:81] duration metric: took 399.762928ms waiting for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:10.384997   27145 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:10.581490   27145 request.go:628] Waited for 196.413982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:01:10.581542   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:01:10.581547   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:10.581554   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:10.581561   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:10.584499   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:10.584518   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:10.584525   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:10.584535   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:10.584543   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:10 GMT
	I0626 20:01:10.584553   27145 round_trippers.go:580]     Audit-Id: b9c6eec7-c352-4cca-b448-d1fac1519154
	I0626 20:01:10.584561   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:10.584569   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:10.584664   27145 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-050558","namespace":"kube-system","uid":"1645e687-25f4-49b9-9d11-5f3db01fe7d2","resourceVersion":"410","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.mirror":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.seen":"2023-06-26T19:59:55.756274617Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0626 20:01:10.780841   27145 request.go:628] Waited for 195.49856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:01:10.780911   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:01:10.780916   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:10.780923   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:10.780930   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:10.784490   27145 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:01:10.784513   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:10.784523   27145 round_trippers.go:580]     Audit-Id: 5a8d6e70-fb9f-4e1c-b74e-62f6129e3e85
	I0626 20:01:10.784531   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:10.784538   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:10.784547   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:10.784556   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:10.784564   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:10 GMT
	I0626 20:01:10.784829   27145 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0626 20:01:10.785135   27145 pod_ready.go:92] pod "kube-scheduler-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:01:10.785147   27145 pod_ready.go:81] duration metric: took 400.142298ms waiting for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:01:10.785156   27145 pod_ready.go:38] duration metric: took 1.201023873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:01:10.785177   27145 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:01:10.785222   27145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:01:10.799153   27145 system_svc.go:56] duration metric: took 13.976979ms WaitForService to wait for kubelet.
	I0626 20:01:10.799182   27145 kubeadm.go:581] duration metric: took 10.25320109s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:01:10.799208   27145 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:01:10.981642   27145 request.go:628] Waited for 182.35712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes
	I0626 20:01:10.981713   27145 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes
	I0626 20:01:10.981725   27145 round_trippers.go:469] Request Headers:
	I0626 20:01:10.981736   27145 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:01:10.981750   27145 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:01:10.984547   27145 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:01:10.984566   27145 round_trippers.go:577] Response Headers:
	I0626 20:01:10.984574   27145 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:01:10.984579   27145 round_trippers.go:580]     Content-Type: application/json
	I0626 20:01:10.984585   27145 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:01:10.984590   27145 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:01:10.984595   27145 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:01:10 GMT
	I0626 20:01:10.984600   27145 round_trippers.go:580]     Audit-Id: 2f6da646-401f-4a76-9c2c-39b09f73db1e
	I0626 20:01:10.984836   27145 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"387","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9646 chars]
	I0626 20:01:10.985233   27145 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:01:10.985253   27145 node_conditions.go:123] node cpu capacity is 2
	I0626 20:01:10.985265   27145 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:01:10.985270   27145 node_conditions.go:123] node cpu capacity is 2
	I0626 20:01:10.985274   27145 node_conditions.go:105] duration metric: took 186.061467ms to run NodePressure ...
	I0626 20:01:10.985283   27145 start.go:228] waiting for startup goroutines ...
	I0626 20:01:10.985308   27145 start.go:242] writing updated cluster config ...
	I0626 20:01:10.985616   27145 ssh_runner.go:195] Run: rm -f paused
	I0626 20:01:11.032047   27145 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:01:11.035014   27145 out.go:177] * Done! kubectl is now configured to use "multinode-050558" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 19:59:34 UTC, ends at Mon 2023-06-26 20:01:19 UTC. --
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.151803546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=72ed1f40-68ae-4697-9df1-8dfdfe6e931e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.152001532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3b64ed5b5e96561385c241af15580b728aa78afb14c290eca33342d8d4ab80c,PodSandboxId:746b2d1ebaf280c591133ddf9f4985b5ea1a93e18d0383484559f3ccc75537bf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687809675219869699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdcc10b8c820181edf013036b5e4de63b005ba3f67211bc78a8cfb62ce5a67e,PodSandboxId:de6be05536891230b8b860b6d4e9635055f07dda1309bb4ee525a671ceb2d4b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687809622993855330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acaa7aec1773eee3c9747d1b7d71436ca1286f1c51936ed40b38ccfb32202267,PodSandboxId:18226b58e3a6fa02fd9f8b2075465fb16efed3fe4d8263f785b5949a31218cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809622862831762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e557f9c77832178874f9096a66b2799b51edb8f433b4dc4094d76b084deed43,PodSandboxId:c0c038ff1186d4f6bb2a3b30b8d7763b9e8533a01863eb16848b40742cb2295f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687809620447681392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db2185f6d0567e402b4289732d88ea5871fe274f49d70f53cd8776ccc3a127f,PodSandboxId:7f5be22b79075fb5fedd1c16c75047c1b6a6e28fac76be252cefe869a5e0e00c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687809618358233696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849
dc57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392d50d8b2da7bd0e7a614a36657a5e2d933fe25318986cac9183a2e661ddd73,PodSandboxId:4781abc06a4e065bd8d28122ff40322bdd92383b58df1f46964631604411ffd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687809597092076145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5176444c7bdbfbfe435addbb4f11e1b79266ac7261225f07aa8746ae9d059cf,PodSandboxId:eb56a3491c08fdea610b644f0580731d671a3f114d5fc9e364dff7082fd392bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687809597097250754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.container.ha
sh: c6ef1c5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b2c6a2c5f6b92a79b3ed8e079e299984914e7272f71c3b4611108e7918fce7,PodSandboxId:93da85c83965ceb52c487fe3af13cd4e520ae28fe8b5dfb45881ad852a39757d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687809597105382120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f,PodSandboxId:ed4c79f8fb409ac39d6faf7af94972bcae8573ae501b9586825a7760ec33823a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687809597044675245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.
container.hash: fc09cd2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=72ed1f40-68ae-4697-9df1-8dfdfe6e931e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.187886190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=51f9d1d1-4772-47b3-abec-18134b7ad4a0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.187950009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=51f9d1d1-4772-47b3-abec-18134b7ad4a0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.188235983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3b64ed5b5e96561385c241af15580b728aa78afb14c290eca33342d8d4ab80c,PodSandboxId:746b2d1ebaf280c591133ddf9f4985b5ea1a93e18d0383484559f3ccc75537bf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687809675219869699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdcc10b8c820181edf013036b5e4de63b005ba3f67211bc78a8cfb62ce5a67e,PodSandboxId:de6be05536891230b8b860b6d4e9635055f07dda1309bb4ee525a671ceb2d4b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687809622993855330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acaa7aec1773eee3c9747d1b7d71436ca1286f1c51936ed40b38ccfb32202267,PodSandboxId:18226b58e3a6fa02fd9f8b2075465fb16efed3fe4d8263f785b5949a31218cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809622862831762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e557f9c77832178874f9096a66b2799b51edb8f433b4dc4094d76b084deed43,PodSandboxId:c0c038ff1186d4f6bb2a3b30b8d7763b9e8533a01863eb16848b40742cb2295f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687809620447681392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db2185f6d0567e402b4289732d88ea5871fe274f49d70f53cd8776ccc3a127f,PodSandboxId:7f5be22b79075fb5fedd1c16c75047c1b6a6e28fac76be252cefe869a5e0e00c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687809618358233696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849
dc57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392d50d8b2da7bd0e7a614a36657a5e2d933fe25318986cac9183a2e661ddd73,PodSandboxId:4781abc06a4e065bd8d28122ff40322bdd92383b58df1f46964631604411ffd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687809597092076145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5176444c7bdbfbfe435addbb4f11e1b79266ac7261225f07aa8746ae9d059cf,PodSandboxId:eb56a3491c08fdea610b644f0580731d671a3f114d5fc9e364dff7082fd392bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687809597097250754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.container.ha
sh: c6ef1c5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b2c6a2c5f6b92a79b3ed8e079e299984914e7272f71c3b4611108e7918fce7,PodSandboxId:93da85c83965ceb52c487fe3af13cd4e520ae28fe8b5dfb45881ad852a39757d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687809597105382120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f,PodSandboxId:ed4c79f8fb409ac39d6faf7af94972bcae8573ae501b9586825a7760ec33823a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687809597044675245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.
container.hash: fc09cd2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=51f9d1d1-4772-47b3-abec-18134b7ad4a0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.225929947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=56609954-1d51-4272-a821-de5b6c653892 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.225992765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=56609954-1d51-4272-a821-de5b6c653892 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.226273208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3b64ed5b5e96561385c241af15580b728aa78afb14c290eca33342d8d4ab80c,PodSandboxId:746b2d1ebaf280c591133ddf9f4985b5ea1a93e18d0383484559f3ccc75537bf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687809675219869699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdcc10b8c820181edf013036b5e4de63b005ba3f67211bc78a8cfb62ce5a67e,PodSandboxId:de6be05536891230b8b860b6d4e9635055f07dda1309bb4ee525a671ceb2d4b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687809622993855330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acaa7aec1773eee3c9747d1b7d71436ca1286f1c51936ed40b38ccfb32202267,PodSandboxId:18226b58e3a6fa02fd9f8b2075465fb16efed3fe4d8263f785b5949a31218cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809622862831762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e557f9c77832178874f9096a66b2799b51edb8f433b4dc4094d76b084deed43,PodSandboxId:c0c038ff1186d4f6bb2a3b30b8d7763b9e8533a01863eb16848b40742cb2295f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687809620447681392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db2185f6d0567e402b4289732d88ea5871fe274f49d70f53cd8776ccc3a127f,PodSandboxId:7f5be22b79075fb5fedd1c16c75047c1b6a6e28fac76be252cefe869a5e0e00c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687809618358233696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849
dc57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392d50d8b2da7bd0e7a614a36657a5e2d933fe25318986cac9183a2e661ddd73,PodSandboxId:4781abc06a4e065bd8d28122ff40322bdd92383b58df1f46964631604411ffd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687809597092076145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5176444c7bdbfbfe435addbb4f11e1b79266ac7261225f07aa8746ae9d059cf,PodSandboxId:eb56a3491c08fdea610b644f0580731d671a3f114d5fc9e364dff7082fd392bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687809597097250754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.container.ha
sh: c6ef1c5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b2c6a2c5f6b92a79b3ed8e079e299984914e7272f71c3b4611108e7918fce7,PodSandboxId:93da85c83965ceb52c487fe3af13cd4e520ae28fe8b5dfb45881ad852a39757d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687809597105382120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f,PodSandboxId:ed4c79f8fb409ac39d6faf7af94972bcae8573ae501b9586825a7760ec33823a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687809597044675245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.
container.hash: fc09cd2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=56609954-1d51-4272-a821-de5b6c653892 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.232625459Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=a2a92d3a-6c98-478f-829b-739b742a5a0f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.232858315Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:746b2d1ebaf280c591133ddf9f4985b5ea1a93e18d0383484559f3ccc75537bf,Metadata:&PodSandboxMetadata{Name:busybox-67b7f59bb-xw4h2,Uid:e30f039c-5595-4af7-88c3-f7b1fbb71fef,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687809672171916132,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,pod-template-hash: 67b7f59bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:01:11.838703941Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:18226b58e3a6fa02fd9f8b2075465fb16efed3fe4d8263f785b5949a31218cdf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fd433ce1-f37e-4168-930f-a93cd00821cb,Namespace:kube-system,Attempt:0,},State
:SANDBOX_READY,CreatedAt:1687809622340393613,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-06-26T20:00:21.994829321Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de6be05536891230b8b860b6d4e9635055f07dda1309bb4ee525a671ceb2d4b5,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-5wffn,Uid:c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687809622313491481,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:00:21.984234522Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0c038ff1186d4f6bb2a3b30b8d7763b9e8533a01863eb16848b40742cb2295f,Metadata:&PodSandboxMetadata{Name:kindnet-vjpzs,Uid:695a59a7-ddfd-4f5f-8084-86279daa17b6,Namespace:kube-system,Attempt:
0,},State:SANDBOX_READY,CreatedAt:1687809617198523029,Labels:map[string]string{app: kindnet,controller-revision-hash: 575d9d6996,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695a59a7-ddfd-4f5f-8084-86279daa17b6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:00:16.836234032Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7f5be22b79075fb5fedd1c16c75047c1b6a6e28fac76be252cefe869a5e0e00c,Metadata:&PodSandboxMetadata{Name:kube-proxy-67x99,Uid:7ffa817a-1b4a-41a1-9a56-5c65849dc57e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687809617171574203,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849dc57e,k8s-app: kube-proxy,pod-templa
te-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:00:16.839193562Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:93da85c83965ceb52c487fe3af13cd4e520ae28fe8b5dfb45881ad852a39757d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-050558,Uid:ce8b8fdad19a87f17af5276f1f8a428a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687809596292367540,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ce8b8fdad19a87f17af5276f1f8a428a,kubernetes.io/config.seen: 2023-06-26T19:59:55.756273823Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ed4c79f8fb409ac39d6faf7af94972bcae8573ae501b9586825a7760ec33823a,Metadata:&PodSandboxMetadata
{Name:kube-apiserver-multinode-050558,Uid:3bf9120f8ca60da96af0ed761aeff36b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687809596279401178,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.229:8443,kubernetes.io/config.hash: 3bf9120f8ca60da96af0ed761aeff36b,kubernetes.io/config.seen: 2023-06-26T19:59:55.756272769Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4781abc06a4e065bd8d28122ff40322bdd92383b58df1f46964631604411ffd7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-050558,Uid:fb51be42b8f4d7cafa13e10ab353dbbb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687809596261675914,Labels:map[string]string{component: kube-scheduler,
io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fb51be42b8f4d7cafa13e10ab353dbbb,kubernetes.io/config.seen: 2023-06-26T19:59:55.756274617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eb56a3491c08fdea610b644f0580731d671a3f114d5fc9e364dff7082fd392bb,Metadata:&PodSandboxMetadata{Name:etcd-multinode-050558,Uid:a51ca9066ce980968640db5826cdbb03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687809596256986918,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.229:2379,kubernet
es.io/config.hash: a51ca9066ce980968640db5826cdbb03,kubernetes.io/config.seen: 2023-06-26T19:59:55.756268397Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=a2a92d3a-6c98-478f-829b-739b742a5a0f name=/runtime.v1.RuntimeService/ListPodSandbox
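	For reference: the entries above are the two stock CRI list RPCs (/runtime.v1.RuntimeService/ListPodSandbox and .../ListContainers, plus their v1alpha2 twins) that kubelet-side tooling polls against CRI-O here. A minimal Go sketch of the same two calls with empty filters follows; it is illustrative only, not part of the captured run, and the CRI-O endpoint unix:///var/run/crio/crio.sock is an assumed default. Running crictl pods and crictl ps -a against the same endpoint exercises the identical RPCs.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O unix socket (assumed default path); grpc-go
		// resolves the unix:// scheme natively.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty request means no filters, matching the "No filters were
		// applied, returning full container list" debug lines in the log.
		pods, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
		if err != nil {
			log.Fatalf("ListPodSandbox: %v", err)
		}
		for _, p := range pods.Items {
			fmt.Printf("sandbox %s %s/%s %s\n", p.Id, p.Metadata.Namespace, p.Metadata.Name, p.State)
		}

		ctrs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range ctrs.Containers {
			fmt.Printf("container %s %s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}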
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.233433054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=84f70823-9ebf-430b-b88f-e15e173686fe name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.233481145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=84f70823-9ebf-430b-b88f-e15e173686fe name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.233653721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3b64ed5b5e96561385c241af15580b728aa78afb14c290eca33342d8d4ab80c,PodSandboxId:746b2d1ebaf280c591133ddf9f4985b5ea1a93e18d0383484559f3ccc75537bf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687809675219869699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdcc10b8c820181edf013036b5e4de63b005ba3f67211bc78a8cfb62ce5a67e,PodSandboxId:de6be05536891230b8b860b6d4e9635055f07dda1309bb4ee525a671ceb2d4b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687809622993855330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acaa7aec1773eee3c9747d1b7d71436ca1286f1c51936ed40b38ccfb32202267,PodSandboxId:18226b58e3a6fa02fd9f8b2075465fb16efed3fe4d8263f785b5949a31218cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809622862831762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e557f9c77832178874f9096a66b2799b51edb8f433b4dc4094d76b084deed43,PodSandboxId:c0c038ff1186d4f6bb2a3b30b8d7763b9e8533a01863eb16848b40742cb2295f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687809620447681392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db2185f6d0567e402b4289732d88ea5871fe274f49d70f53cd8776ccc3a127f,PodSandboxId:7f5be22b79075fb5fedd1c16c75047c1b6a6e28fac76be252cefe869a5e0e00c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687809618358233696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849
dc57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392d50d8b2da7bd0e7a614a36657a5e2d933fe25318986cac9183a2e661ddd73,PodSandboxId:4781abc06a4e065bd8d28122ff40322bdd92383b58df1f46964631604411ffd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687809597092076145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5176444c7bdbfbfe435addbb4f11e1b79266ac7261225f07aa8746ae9d059cf,PodSandboxId:eb56a3491c08fdea610b644f0580731d671a3f114d5fc9e364dff7082fd392bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687809597097250754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.container.ha
sh: c6ef1c5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b2c6a2c5f6b92a79b3ed8e079e299984914e7272f71c3b4611108e7918fce7,PodSandboxId:93da85c83965ceb52c487fe3af13cd4e520ae28fe8b5dfb45881ad852a39757d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687809597105382120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f,PodSandboxId:ed4c79f8fb409ac39d6faf7af94972bcae8573ae501b9586825a7760ec33823a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687809597044675245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.
container.hash: fc09cd2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=84f70823-9ebf-430b-b88f-e15e173686fe name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.264014601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6c870572-65ff-4d5b-b038-3566c6c1e616 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.264078615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6c870572-65ff-4d5b-b038-3566c6c1e616 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.264355995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3b64ed5b5e96561385c241af15580b728aa78afb14c290eca33342d8d4ab80c,PodSandboxId:746b2d1ebaf280c591133ddf9f4985b5ea1a93e18d0383484559f3ccc75537bf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687809675219869699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdcc10b8c820181edf013036b5e4de63b005ba3f67211bc78a8cfb62ce5a67e,PodSandboxId:de6be05536891230b8b860b6d4e9635055f07dda1309bb4ee525a671ceb2d4b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687809622993855330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acaa7aec1773eee3c9747d1b7d71436ca1286f1c51936ed40b38ccfb32202267,PodSandboxId:18226b58e3a6fa02fd9f8b2075465fb16efed3fe4d8263f785b5949a31218cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809622862831762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e557f9c77832178874f9096a66b2799b51edb8f433b4dc4094d76b084deed43,PodSandboxId:c0c038ff1186d4f6bb2a3b30b8d7763b9e8533a01863eb16848b40742cb2295f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687809620447681392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db2185f6d0567e402b4289732d88ea5871fe274f49d70f53cd8776ccc3a127f,PodSandboxId:7f5be22b79075fb5fedd1c16c75047c1b6a6e28fac76be252cefe869a5e0e00c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687809618358233696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849
dc57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392d50d8b2da7bd0e7a614a36657a5e2d933fe25318986cac9183a2e661ddd73,PodSandboxId:4781abc06a4e065bd8d28122ff40322bdd92383b58df1f46964631604411ffd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687809597092076145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5176444c7bdbfbfe435addbb4f11e1b79266ac7261225f07aa8746ae9d059cf,PodSandboxId:eb56a3491c08fdea610b644f0580731d671a3f114d5fc9e364dff7082fd392bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687809597097250754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.container.ha
sh: c6ef1c5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b2c6a2c5f6b92a79b3ed8e079e299984914e7272f71c3b4611108e7918fce7,PodSandboxId:93da85c83965ceb52c487fe3af13cd4e520ae28fe8b5dfb45881ad852a39757d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687809597105382120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f,PodSandboxId:ed4c79f8fb409ac39d6faf7af94972bcae8573ae501b9586825a7760ec33823a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687809597044675245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.
container.hash: fc09cd2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6c870572-65ff-4d5b-b038-3566c6c1e616 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.294785317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f3e47e8-e014-4405-9013-04a63a793e61 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.294845200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f3e47e8-e014-4405-9013-04a63a793e61 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.295027124Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3b64ed5b5e96561385c241af15580b728aa78afb14c290eca33342d8d4ab80c,PodSandboxId:746b2d1ebaf280c591133ddf9f4985b5ea1a93e18d0383484559f3ccc75537bf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687809675219869699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdcc10b8c820181edf013036b5e4de63b005ba3f67211bc78a8cfb62ce5a67e,PodSandboxId:de6be05536891230b8b860b6d4e9635055f07dda1309bb4ee525a671ceb2d4b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687809622993855330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acaa7aec1773eee3c9747d1b7d71436ca1286f1c51936ed40b38ccfb32202267,PodSandboxId:18226b58e3a6fa02fd9f8b2075465fb16efed3fe4d8263f785b5949a31218cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809622862831762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e557f9c77832178874f9096a66b2799b51edb8f433b4dc4094d76b084deed43,PodSandboxId:c0c038ff1186d4f6bb2a3b30b8d7763b9e8533a01863eb16848b40742cb2295f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687809620447681392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db2185f6d0567e402b4289732d88ea5871fe274f49d70f53cd8776ccc3a127f,PodSandboxId:7f5be22b79075fb5fedd1c16c75047c1b6a6e28fac76be252cefe869a5e0e00c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687809618358233696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849
dc57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392d50d8b2da7bd0e7a614a36657a5e2d933fe25318986cac9183a2e661ddd73,PodSandboxId:4781abc06a4e065bd8d28122ff40322bdd92383b58df1f46964631604411ffd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687809597092076145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5176444c7bdbfbfe435addbb4f11e1b79266ac7261225f07aa8746ae9d059cf,PodSandboxId:eb56a3491c08fdea610b644f0580731d671a3f114d5fc9e364dff7082fd392bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687809597097250754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.container.ha
sh: c6ef1c5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b2c6a2c5f6b92a79b3ed8e079e299984914e7272f71c3b4611108e7918fce7,PodSandboxId:93da85c83965ceb52c487fe3af13cd4e520ae28fe8b5dfb45881ad852a39757d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687809597105382120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f,PodSandboxId:ed4c79f8fb409ac39d6faf7af94972bcae8573ae501b9586825a7760ec33823a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687809597044675245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.
container.hash: fc09cd2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f3e47e8-e014-4405-9013-04a63a793e61 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.337769118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=687c3012-15c1-4e5c-880f-94e38fb34474 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.337831989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=687c3012-15c1-4e5c-880f-94e38fb34474 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.338022585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3b64ed5b5e96561385c241af15580b728aa78afb14c290eca33342d8d4ab80c,PodSandboxId:746b2d1ebaf280c591133ddf9f4985b5ea1a93e18d0383484559f3ccc75537bf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687809675219869699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdcc10b8c820181edf013036b5e4de63b005ba3f67211bc78a8cfb62ce5a67e,PodSandboxId:de6be05536891230b8b860b6d4e9635055f07dda1309bb4ee525a671ceb2d4b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687809622993855330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acaa7aec1773eee3c9747d1b7d71436ca1286f1c51936ed40b38ccfb32202267,PodSandboxId:18226b58e3a6fa02fd9f8b2075465fb16efed3fe4d8263f785b5949a31218cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809622862831762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e557f9c77832178874f9096a66b2799b51edb8f433b4dc4094d76b084deed43,PodSandboxId:c0c038ff1186d4f6bb2a3b30b8d7763b9e8533a01863eb16848b40742cb2295f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687809620447681392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db2185f6d0567e402b4289732d88ea5871fe274f49d70f53cd8776ccc3a127f,PodSandboxId:7f5be22b79075fb5fedd1c16c75047c1b6a6e28fac76be252cefe869a5e0e00c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687809618358233696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849
dc57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392d50d8b2da7bd0e7a614a36657a5e2d933fe25318986cac9183a2e661ddd73,PodSandboxId:4781abc06a4e065bd8d28122ff40322bdd92383b58df1f46964631604411ffd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687809597092076145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5176444c7bdbfbfe435addbb4f11e1b79266ac7261225f07aa8746ae9d059cf,PodSandboxId:eb56a3491c08fdea610b644f0580731d671a3f114d5fc9e364dff7082fd392bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687809597097250754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.container.ha
sh: c6ef1c5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b2c6a2c5f6b92a79b3ed8e079e299984914e7272f71c3b4611108e7918fce7,PodSandboxId:93da85c83965ceb52c487fe3af13cd4e520ae28fe8b5dfb45881ad852a39757d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687809597105382120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f,PodSandboxId:ed4c79f8fb409ac39d6faf7af94972bcae8573ae501b9586825a7760ec33823a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687809597044675245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.
container.hash: fc09cd2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=687c3012-15c1-4e5c-880f-94e38fb34474 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.371897407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4bc763b8-c8a5-4759-a532-e2eda851e068 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.371965433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4bc763b8-c8a5-4759-a532-e2eda851e068 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:01:19 multinode-050558 crio[716]: time="2023-06-26 20:01:19.372239208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3b64ed5b5e96561385c241af15580b728aa78afb14c290eca33342d8d4ab80c,PodSandboxId:746b2d1ebaf280c591133ddf9f4985b5ea1a93e18d0383484559f3ccc75537bf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687809675219869699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffdcc10b8c820181edf013036b5e4de63b005ba3f67211bc78a8cfb62ce5a67e,PodSandboxId:de6be05536891230b8b860b6d4e9635055f07dda1309bb4ee525a671ceb2d4b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687809622993855330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acaa7aec1773eee3c9747d1b7d71436ca1286f1c51936ed40b38ccfb32202267,PodSandboxId:18226b58e3a6fa02fd9f8b2075465fb16efed3fe4d8263f785b5949a31218cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687809622862831762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e557f9c77832178874f9096a66b2799b51edb8f433b4dc4094d76b084deed43,PodSandboxId:c0c038ff1186d4f6bb2a3b30b8d7763b9e8533a01863eb16848b40742cb2295f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687809620447681392,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db2185f6d0567e402b4289732d88ea5871fe274f49d70f53cd8776ccc3a127f,PodSandboxId:7f5be22b79075fb5fedd1c16c75047c1b6a6e28fac76be252cefe869a5e0e00c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687809618358233696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849
dc57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:392d50d8b2da7bd0e7a614a36657a5e2d933fe25318986cac9183a2e661ddd73,PodSandboxId:4781abc06a4e065bd8d28122ff40322bdd92383b58df1f46964631604411ffd7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687809597092076145,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5176444c7bdbfbfe435addbb4f11e1b79266ac7261225f07aa8746ae9d059cf,PodSandboxId:eb56a3491c08fdea610b644f0580731d671a3f114d5fc9e364dff7082fd392bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687809597097250754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.container.ha
sh: c6ef1c5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56b2c6a2c5f6b92a79b3ed8e079e299984914e7272f71c3b4611108e7918fce7,PodSandboxId:93da85c83965ceb52c487fe3af13cd4e520ae28fe8b5dfb45881ad852a39757d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687809597105382120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f,PodSandboxId:ed4c79f8fb409ac39d6faf7af94972bcae8573ae501b9586825a7760ec33823a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687809597044675245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.
container.hash: fc09cd2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4bc763b8-c8a5-4759-a532-e2eda851e068 name=/runtime.v1alpha2.RuntimeService/ListContainers
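	Note: the two identical ListContainers dumps above are consecutive kubelet polls of the CRI socket; crio is logging at debug level, so every RPC is mirrored into the journal along with its full response. A sketch for inspecting the log level that produces this volume, assuming the stock crio config location on the guest:

	    # inspect crio's configured log level on the minikube guest
	    minikube ssh -p multinode-050558 "sudo grep -rn log_level /etc/crio/"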
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	e3b64ed5b5e96       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   746b2d1ebaf28
	ffdcc10b8c820       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      56 seconds ago       Running             coredns                   0                   de6be05536891
	acaa7aec1773e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Running             storage-provisioner       0                   18226b58e3a6f
	6e557f9c77832       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      58 seconds ago       Running             kindnet-cni               0                   c0c038ff1186d
	5db2185f6d056       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      About a minute ago   Running             kube-proxy                0                   7f5be22b79075
	56b2c6a2c5f6b       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      About a minute ago   Running             kube-controller-manager   0                   93da85c83965c
	c5176444c7bdb       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   eb56a3491c08f
	392d50d8b2da7       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      About a minute ago   Running             kube-scheduler            0                   4781abc06a4e0
	f74a9c2e5ef75       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      About a minute ago   Running             kube-apiserver            0                   ed4c79f8fb409
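	This table is the human-readable view of the same data as the ListContainers responses above: one running container per control-plane component, plus busybox, coredns, storage-provisioner, kindnet, and kube-proxy, all at attempt 0. A sketch for reproducing it on the node (crictl ships in the minikube guest image):

	    # list all CRI containers, including exited ones
	    minikube ssh -p multinode-050558 "sudo crictl ps -a"

	Adding -o json yields the raw payload seen in the crio debug log.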
	
	* 
	* ==> coredns [ffdcc10b8c820181edf013036b5e4de63b005ba3f67211bc78a8cfb62ce5a67e] <==
	* [INFO] 10.244.0.3:34474 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000258202s
	[INFO] 10.244.1.2:39092 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000291149s
	[INFO] 10.244.1.2:40179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186009s
	[INFO] 10.244.1.2:58060 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00009745s
	[INFO] 10.244.1.2:33426 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074284s
	[INFO] 10.244.1.2:44906 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001465901s
	[INFO] 10.244.1.2:40778 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137229s
	[INFO] 10.244.1.2:34780 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085515s
	[INFO] 10.244.1.2:54778 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068874s
	[INFO] 10.244.0.3:47330 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016034s
	[INFO] 10.244.0.3:53202 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051627s
	[INFO] 10.244.0.3:60034 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000027885s
	[INFO] 10.244.0.3:39596 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039988s
	[INFO] 10.244.1.2:59854 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135825s
	[INFO] 10.244.1.2:54718 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159511s
	[INFO] 10.244.1.2:43198 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089035s
	[INFO] 10.244.1.2:58604 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076532s
	[INFO] 10.244.0.3:38199 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124421s
	[INFO] 10.244.0.3:44260 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147484s
	[INFO] 10.244.0.3:42204 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184473s
	[INFO] 10.244.0.3:42846 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108895s
	[INFO] 10.244.1.2:37689 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142948s
	[INFO] 10.244.1.2:42940 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095689s
	[INFO] 10.244.1.2:34719 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00015967s
	[INFO] 10.244.1.2:52182 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136403s
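	These are routine service lookups (kubernetes.default, host.minikube.internal, plus reverse lookups) answered NOERROR from pods on both nodes (10.244.0.3 and 10.244.1.2), so cluster DNS looks healthy at this point. A minimal spot check from one of the busybox pods listed above (a sketch; busybox bundles nslookup):

	    # resolve the API service through cluster DNS from the default namespace
	    kubectl exec busybox-67b7f59bb-xw4h2 -- nslookup kubernetes.default.svc.cluster.local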
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-050558
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-050558
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=multinode-050558
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_00_05_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:00:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-050558
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 20:01:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 20:00:21 +0000   Mon, 26 Jun 2023 19:59:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 20:00:21 +0000   Mon, 26 Jun 2023 19:59:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 20:00:21 +0000   Mon, 26 Jun 2023 19:59:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 20:00:21 +0000   Mon, 26 Jun 2023 20:00:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    multinode-050558
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3ea7387ef9741e297b6451ef059cb66
	  System UUID:                f3ea7387-ef97-41e2-97b6-451ef059cb66
	  Boot ID:                    e6774d83-0e21-4ecd-aa8a-4ffee20375aa
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-xw4h2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5d78c9869d-5wffn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 etcd-multinode-050558                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         77s
	  kube-system                 kindnet-vjpzs                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      63s
	  kube-system                 kube-apiserver-multinode-050558             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-multinode-050558    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-67x99                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-multinode-050558             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 75s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s   kubelet          Node multinode-050558 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s   kubelet          Node multinode-050558 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s   kubelet          Node multinode-050558 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s   node-controller  Node multinode-050558 event: Registered Node multinode-050558 in Controller
	  Normal  NodeReady                58s   kubelet          Node multinode-050558 status is now: NodeReady
	
	
	Name:               multinode-050558-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-050558-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:00:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-050558-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 20:01:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 20:01:09 +0000   Mon, 26 Jun 2023 20:00:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 20:01:09 +0000   Mon, 26 Jun 2023 20:00:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 20:01:09 +0000   Mon, 26 Jun 2023 20:00:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 20:01:09 +0000   Mon, 26 Jun 2023 20:01:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    multinode-050558-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6af71eae9f2c494a98e3c7c6d80044ef
	  System UUID:                6af71eae-9f2c-494a-98e3-c7c6d80044ef
	  Boot ID:                    e9f7895a-b9f7-4f9a-9ba7-7e3bdd64ea3c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-z697w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-kmcqm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20s
	  kube-system                 kube-proxy-wwg6x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientMemory  20s (x5 over 21s)  kubelet          Node multinode-050558-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x5 over 21s)  kubelet          Node multinode-050558-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x5 over 21s)  kubelet          Node multinode-050558-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node multinode-050558-m02 event: Registered Node multinode-050558-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-050558-m02 status is now: NodeReady
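	Both nodes report Ready with no taints, and each carries its own pod CIDR (10.244.0.0/24 and 10.244.1.0/24). A sketch for confirming the per-node CIDRs that kindnet routes between:

	    # print each node's allocated pod CIDR
	    kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR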
	
	* 
	* ==> dmesg <==
	* [Jun26 19:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071792] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.107637] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.243183] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150916] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.058538] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.376509] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.106414] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.143932] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.106587] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.207983] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +8.995120] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[Jun26 20:00] systemd-fstab-generator[1265]: Ignoring "noauto" for root device
	[ +19.575223] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [c5176444c7bdbfbfe435addbb4f11e1b79266ac7261225f07aa8746ae9d059cf] <==
	* {"level":"info","ts":"2023-06-26T19:59:58.905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 switched to configuration voters=(13286884612305677681)"}
	{"level":"info","ts":"2023-06-26T19:59:58.905Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","added-peer-id":"b8647f2870156d71","added-peer-peer-urls":["https://192.168.39.229:2380"]}
	{"level":"info","ts":"2023-06-26T19:59:58.919Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-26T19:59:58.922Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b8647f2870156d71","initial-advertise-peer-urls":["https://192.168.39.229:2380"],"listen-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-26T19:59:58.922Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-26T19:59:58.920Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2023-06-26T19:59:58.922Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2023-06-26T19:59:59.268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-26T19:59:59.268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-26T19:59:59.268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgPreVoteResp from b8647f2870156d71 at term 1"}
	{"level":"info","ts":"2023-06-26T19:59:59.268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became candidate at term 2"}
	{"level":"info","ts":"2023-06-26T19:59:59.268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 2"}
	{"level":"info","ts":"2023-06-26T19:59:59.268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became leader at term 2"}
	{"level":"info","ts":"2023-06-26T19:59:59.268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 2"}
	{"level":"info","ts":"2023-06-26T19:59:59.270Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b8647f2870156d71","local-member-attributes":"{Name:multinode-050558 ClientURLs:[https://192.168.39.229:2379]}","request-path":"/0/members/b8647f2870156d71/attributes","cluster-id":"2bfbf13ce68722b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-26T19:59:59.270Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T19:59:59.271Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T19:59:59.272Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.229:2379"}
	{"level":"info","ts":"2023-06-26T19:59:59.272Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-26T19:59:59.272Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-26T19:59:59.272Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T19:59:59.277Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-26T19:59:59.278Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T19:59:59.278Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T19:59:59.278Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
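	A clean single-member bootstrap: b8647f2870156d71 wins the term-2 election and serves clients on 192.168.39.229:2379 and 127.0.0.1:2379 at cluster version 3.5. A health spot check from the guest, reusing the certificate paths etcd logs above (a sketch; substitute a dedicated client certificate if the serving cert is not valid for client auth):

	    sudo ETCDCTL_API=3 etcdctl \
	      --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health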
	
	* 
	* ==> kernel <==
	*  20:01:19 up 1 min,  0 users,  load average: 0.37, 0.16, 0.06
	Linux multinode-050558 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [6e557f9c77832178874f9096a66b2799b51edb8f433b4dc4094d76b084deed43] <==
	* I0626 20:00:21.187246       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0626 20:00:21.187364       1 main.go:107] hostIP = 192.168.39.229
	podIP = 192.168.39.229
	I0626 20:00:21.187721       1 main.go:116] setting mtu 1500 for CNI 
	I0626 20:00:21.187763       1 main.go:146] kindnetd IP family: "ipv4"
	I0626 20:00:21.187787       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0626 20:00:21.595301       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:00:21.595388       1 main.go:227] handling current node
	I0626 20:00:31.609226       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:00:31.609333       1 main.go:227] handling current node
	I0626 20:00:41.620022       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:00:41.620361       1 main.go:227] handling current node
	I0626 20:00:51.625434       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:00:51.625493       1 main.go:227] handling current node
	I0626 20:01:01.639090       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:01:01.639196       1 main.go:227] handling current node
	I0626 20:01:01.639210       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I0626 20:01:01.639216       1 main.go:250] Node multinode-050558-m02 has CIDR [10.244.1.0/24] 
	I0626 20:01:01.639519       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.133 Flags: [] Table: 0} 
	I0626 20:01:11.652442       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:01:11.652501       1 main.go:227] handling current node
	I0626 20:01:11.652516       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I0626 20:01:11.652523       1 main.go:250] Node multinode-050558-m02 has CIDR [10.244.1.0/24] 
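	kindnet is doing both of its jobs here: reconciling the local node every ten seconds and, once multinode-050558-m02 appears at 20:01:01, installing a route to that node's pod CIDR via its node IP. A sketch for verifying the route on the control plane:

	    # expect: 10.244.1.0/24 via 192.168.39.133 (per the "Adding route" line above)
	    minikube ssh -p multinode-050558 "ip route show 10.244.1.0/24"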
	
	* 
	* ==> kube-apiserver [f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f] <==
	* I0626 20:00:01.149551       1 shared_informer.go:318] Caches are synced for configmaps
	I0626 20:00:01.149789       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0626 20:00:01.150052       1 aggregator.go:152] initial CRD sync complete...
	I0626 20:00:01.150092       1 autoregister_controller.go:141] Starting autoregister controller
	I0626 20:00:01.150108       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0626 20:00:01.150173       1 cache.go:39] Caches are synced for autoregister controller
	I0626 20:00:01.155625       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0626 20:00:01.160286       1 controller.go:624] quota admission added evaluator for: namespaces
	I0626 20:00:01.224845       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0626 20:00:01.757446       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0626 20:00:02.043731       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0626 20:00:02.049043       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0626 20:00:02.049084       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0626 20:00:02.726482       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0626 20:00:02.789931       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0626 20:00:02.868989       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0626 20:00:02.876987       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.39.229]
	I0626 20:00:02.877972       1 controller.go:624] quota admission added evaluator for: endpoints
	I0626 20:00:02.882573       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0626 20:00:03.124700       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0626 20:00:04.708789       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0626 20:00:04.733999       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0626 20:00:04.750500       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0626 20:00:16.780322       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0626 20:00:16.858076       1 controller.go:624] quota admission added evaluator for: replicasets.apps
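	An unremarkable apiserver startup: caches sync, quota admission evaluators register as each resource type is first used, and cluster IPs 10.96.0.1 and 10.96.0.10 are allocated for kubernetes and kube-dns. A quick readiness probe against the same apiserver (a sketch):

	    # per-check readiness detail straight from the apiserver
	    kubectl get --raw='/readyz?verbose'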
	
	* 
	* ==> kube-controller-manager [56b2c6a2c5f6b92a79b3ed8e079e299984914e7272f71c3b4611108e7918fce7] <==
	* I0626 20:00:16.081896       1 shared_informer.go:318] Caches are synced for resource quota
	I0626 20:00:16.085196       1 shared_informer.go:318] Caches are synced for resource quota
	I0626 20:00:16.086298       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0626 20:00:16.124551       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0626 20:00:16.517787       1 shared_informer.go:318] Caches are synced for garbage collector
	I0626 20:00:16.575375       1 shared_informer.go:318] Caches are synced for garbage collector
	I0626 20:00:16.575398       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0626 20:00:16.800330       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vjpzs"
	I0626 20:00:16.809429       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-67x99"
	I0626 20:00:16.897327       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0626 20:00:17.008773       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-5wffn"
	I0626 20:00:17.037032       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0626 20:00:17.046974       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-fs5jf"
	I0626 20:00:17.202051       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-fs5jf"
	I0626 20:00:26.025739       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0626 20:00:59.768322       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-050558-m02\" does not exist"
	I0626 20:00:59.787530       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-050558-m02" podCIDRs=[10.244.1.0/24]
	I0626 20:00:59.818508       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kmcqm"
	I0626 20:00:59.825724       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wwg6x"
	I0626 20:01:01.031966       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-050558-m02"
	I0626 20:01:01.032329       1 event.go:307] "Event occurred" object="multinode-050558-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-050558-m02 event: Registered Node multinode-050558-m02 in Controller"
	W0626 20:01:09.530319       1 topologycache.go:232] Can't get CPU or zone information for multinode-050558-m02 node
	I0626 20:01:11.790383       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0626 20:01:11.808498       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-z697w"
	I0626 20:01:11.822439       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-xw4h2"
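	The controller manager assigns 10.244.1.0/24 to the new worker and scales busybox to two replicas, one landing on each node. The "Failed to update statusUpdateNeeded field" line appears benign: the attach/detach controller observed a pod referencing multinode-050558-m02 an instant before the Node object existed. A sketch for confirming the CIDR assignment:

	    kubectl get node multinode-050558-m02 -o jsonpath='{.spec.podCIDR}'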
	
	* 
	* ==> kube-proxy [5db2185f6d0567e402b4289732d88ea5871fe274f49d70f53cd8776ccc3a127f] <==
	* I0626 20:00:18.616905       1 node.go:141] Successfully retrieved node IP: 192.168.39.229
	I0626 20:00:18.616993       1 server_others.go:110] "Detected node IP" address="192.168.39.229"
	I0626 20:00:18.617015       1 server_others.go:554] "Using iptables proxy"
	I0626 20:00:18.662113       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0626 20:00:18.662263       1 server_others.go:192] "Using iptables Proxier"
	I0626 20:00:18.662950       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 20:00:18.664315       1 server.go:658] "Version info" version="v1.27.3"
	I0626 20:00:18.664366       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 20:00:18.666692       1 config.go:188] "Starting service config controller"
	I0626 20:00:18.666936       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 20:00:18.667237       1 config.go:97] "Starting endpoint slice config controller"
	I0626 20:00:18.667271       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 20:00:18.669372       1 config.go:315] "Starting node config controller"
	I0626 20:00:18.669407       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 20:00:18.768280       1 shared_informer.go:318] Caches are synced for service config
	I0626 20:00:18.768547       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 20:00:18.769518       1 shared_informer.go:318] Caches are synced for node config
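	kube-proxy settles into single-stack iptables mode (IPv6 as a secondary family is unsupported here) and syncs its service, endpoint-slice, and node config caches. A sketch for inspecting the chains it programs on the node:

	    # top-level service dispatch chain written by kube-proxy in iptables mode
	    minikube ssh -p multinode-050558 "sudo iptables -t nat -L KUBE-SERVICES -n"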
	
	* 
	* ==> kube-scheduler [392d50d8b2da7bd0e7a614a36657a5e2d933fe25318986cac9183a2e661ddd73] <==
	* W0626 20:00:01.233876       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 20:00:01.233885       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0626 20:00:01.233936       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0626 20:00:01.233974       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0626 20:00:01.233942       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:00:01.233990       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0626 20:00:01.234038       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:00:01.234046       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 20:00:02.032482       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 20:00:02.032532       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 20:00:02.143775       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:00:02.143876       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 20:00:02.247242       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 20:00:02.247266       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0626 20:00:02.275636       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 20:00:02.275719       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0626 20:00:02.302215       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 20:00:02.302273       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0626 20:00:02.310738       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0626 20:00:02.310990       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0626 20:00:02.447034       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 20:00:02.447294       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0626 20:00:02.448732       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:00:02.448780       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0626 20:00:04.600377       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
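	All of the forbidden/failed-to-list warnings fall in the first seconds after 20:00:01 and read like the usual bootstrap ordering race: the scheduler's informers start listing before the apiserver has reconciled the system RBAC bindings, and the final line shows the caches syncing cleanly at 20:00:04. Were they to persist, the grants could be checked directly (a sketch):

	    # verify the scheduler identity can list pods cluster-wide
	    kubectl auth can-i list pods --as=system:kube-scheduler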
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 19:59:34 UTC, ends at Mon 2023-06-26 20:01:19 UTC. --
	Jun 26 20:00:16 multinode-050558 kubelet[1272]: I0626 20:00:16.921004    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn6nk\" (UniqueName: \"kubernetes.io/projected/7ffa817a-1b4a-41a1-9a56-5c65849dc57e-kube-api-access-tn6nk\") pod \"kube-proxy-67x99\" (UID: \"7ffa817a-1b4a-41a1-9a56-5c65849dc57e\") " pod="kube-system/kube-proxy-67x99"
	Jun 26 20:00:16 multinode-050558 kubelet[1272]: I0626 20:00:16.921101    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/695a59a7-ddfd-4f5f-8084-86279daa17b6-lib-modules\") pod \"kindnet-vjpzs\" (UID: \"695a59a7-ddfd-4f5f-8084-86279daa17b6\") " pod="kube-system/kindnet-vjpzs"
	Jun 26 20:00:16 multinode-050558 kubelet[1272]: I0626 20:00:16.921192    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs2t2\" (UniqueName: \"kubernetes.io/projected/695a59a7-ddfd-4f5f-8084-86279daa17b6-kube-api-access-fs2t2\") pod \"kindnet-vjpzs\" (UID: \"695a59a7-ddfd-4f5f-8084-86279daa17b6\") " pod="kube-system/kindnet-vjpzs"
	Jun 26 20:00:16 multinode-050558 kubelet[1272]: I0626 20:00:16.921215    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ffa817a-1b4a-41a1-9a56-5c65849dc57e-kube-proxy\") pod \"kube-proxy-67x99\" (UID: \"7ffa817a-1b4a-41a1-9a56-5c65849dc57e\") " pod="kube-system/kube-proxy-67x99"
	Jun 26 20:00:16 multinode-050558 kubelet[1272]: I0626 20:00:16.921235    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/695a59a7-ddfd-4f5f-8084-86279daa17b6-cni-cfg\") pod \"kindnet-vjpzs\" (UID: \"695a59a7-ddfd-4f5f-8084-86279daa17b6\") " pod="kube-system/kindnet-vjpzs"
	Jun 26 20:00:16 multinode-050558 kubelet[1272]: I0626 20:00:16.921253    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ffa817a-1b4a-41a1-9a56-5c65849dc57e-lib-modules\") pod \"kube-proxy-67x99\" (UID: \"7ffa817a-1b4a-41a1-9a56-5c65849dc57e\") " pod="kube-system/kube-proxy-67x99"
	Jun 26 20:00:16 multinode-050558 kubelet[1272]: I0626 20:00:16.921273    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/695a59a7-ddfd-4f5f-8084-86279daa17b6-xtables-lock\") pod \"kindnet-vjpzs\" (UID: \"695a59a7-ddfd-4f5f-8084-86279daa17b6\") " pod="kube-system/kindnet-vjpzs"
	Jun 26 20:00:16 multinode-050558 kubelet[1272]: I0626 20:00:16.921291    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ffa817a-1b4a-41a1-9a56-5c65849dc57e-xtables-lock\") pod \"kube-proxy-67x99\" (UID: \"7ffa817a-1b4a-41a1-9a56-5c65849dc57e\") " pod="kube-system/kube-proxy-67x99"
	Jun 26 20:00:21 multinode-050558 kubelet[1272]: I0626 20:00:21.017214    1272 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-67x99" podStartSLOduration=5.017112035 podCreationTimestamp="2023-06-26 20:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-26 20:00:19.008194973 +0000 UTC m=+14.340618804" watchObservedRunningTime="2023-06-26 20:00:21.017112035 +0000 UTC m=+16.349535920"
	Jun 26 20:00:21 multinode-050558 kubelet[1272]: I0626 20:00:21.942346    1272 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jun 26 20:00:21 multinode-050558 kubelet[1272]: I0626 20:00:21.984371    1272 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-vjpzs" podStartSLOduration=5.984340731 podCreationTimestamp="2023-06-26 20:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-26 20:00:21.018269466 +0000 UTC m=+16.350693295" watchObservedRunningTime="2023-06-26 20:00:21.984340731 +0000 UTC m=+17.316764562"
	Jun 26 20:00:21 multinode-050558 kubelet[1272]: I0626 20:00:21.984483    1272 topology_manager.go:212] "Topology Admit Handler"
	Jun 26 20:00:21 multinode-050558 kubelet[1272]: I0626 20:00:21.994967    1272 topology_manager.go:212] "Topology Admit Handler"
	Jun 26 20:00:22 multinode-050558 kubelet[1272]: I0626 20:00:22.158758    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5-config-volume\") pod \"coredns-5d78c9869d-5wffn\" (UID: \"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5\") " pod="kube-system/coredns-5d78c9869d-5wffn"
	Jun 26 20:00:22 multinode-050558 kubelet[1272]: I0626 20:00:22.158846    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lchr\" (UniqueName: \"kubernetes.io/projected/c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5-kube-api-access-5lchr\") pod \"coredns-5d78c9869d-5wffn\" (UID: \"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5\") " pod="kube-system/coredns-5d78c9869d-5wffn"
	Jun 26 20:00:22 multinode-050558 kubelet[1272]: I0626 20:00:22.158873    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rh7h\" (UniqueName: \"kubernetes.io/projected/fd433ce1-f37e-4168-930f-a93cd00821cb-kube-api-access-2rh7h\") pod \"storage-provisioner\" (UID: \"fd433ce1-f37e-4168-930f-a93cd00821cb\") " pod="kube-system/storage-provisioner"
	Jun 26 20:00:22 multinode-050558 kubelet[1272]: I0626 20:00:22.158894    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fd433ce1-f37e-4168-930f-a93cd00821cb-tmp\") pod \"storage-provisioner\" (UID: \"fd433ce1-f37e-4168-930f-a93cd00821cb\") " pod="kube-system/storage-provisioner"
	Jun 26 20:00:24 multinode-050558 kubelet[1272]: I0626 20:00:24.049963    1272 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.049929862 podCreationTimestamp="2023-06-26 20:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-26 20:00:24.034677755 +0000 UTC m=+19.367101586" watchObservedRunningTime="2023-06-26 20:00:24.049929862 +0000 UTC m=+19.382353736"
	Jun 26 20:00:24 multinode-050558 kubelet[1272]: I0626 20:00:24.936098    1272 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-5wffn" podStartSLOduration=8.936063649 podCreationTimestamp="2023-06-26 20:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-26 20:00:24.053506394 +0000 UTC m=+19.385930226" watchObservedRunningTime="2023-06-26 20:00:24.936063649 +0000 UTC m=+20.268487488"
	Jun 26 20:01:04 multinode-050558 kubelet[1272]: E0626 20:01:04.992079    1272 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 20:01:04 multinode-050558 kubelet[1272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 20:01:04 multinode-050558 kubelet[1272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 20:01:04 multinode-050558 kubelet[1272]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 20:01:11 multinode-050558 kubelet[1272]: I0626 20:01:11.839055    1272 topology_manager.go:212] "Topology Admit Handler"
	Jun 26 20:01:12 multinode-050558 kubelet[1272]: I0626 20:01:12.008389    1272 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88pnc\" (UniqueName: \"kubernetes.io/projected/e30f039c-5595-4af7-88c3-f7b1fbb71fef-kube-api-access-88pnc\") pod \"busybox-67b7f59bb-xw4h2\" (UID: \"e30f039c-5595-4af7-88c3-f7b1fbb71fef\") " pod="default/busybox-67b7f59bb-xw4h2"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-050558 -n multinode-050558
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-050558 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.01s)
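Note the recurring "Could not set up iptables canary" error in the kubelet journal above: kubelet periodically creates a throwaway KUBE-KUBELET-CANARY chain (so components can detect iptables flushes), and on this guest the ip6tables nat table cannot be initialized, which usually means the ip6table_nat kernel module is absent from the ISO kernel. A minimal Go sketch of the same probe (assuming root and ip6tables on PATH; this is not kubelet's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Probe whether the ip6tables nat table is usable by creating and
	// deleting a throwaway chain, the same operation that fails in the
	// kubelet journal above. Requires root and ip6tables on PATH.
	func main() {
		out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		if err != nil {
			// On this VM: exit status 3, "can't initialize ip6tables table `nat'",
			// i.e. the ip6table_nat kernel module is not available.
			fmt.Printf("canary create failed: %v\n%s", err, out)
			return
		}
		// Clean up the probe chain so repeated runs keep working.
		exec.Command("ip6tables", "-t", "nat", "-X", "KUBE-KUBELET-CANARY").Run()
		fmt.Println("ip6tables nat table is available")
	}

On this VM the create fails with exit status 3, matching the journal entries; it is noise for these IPv4-only tests, but worth tracking against the ISO kernel config.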

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (684.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-050558
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-050558
E0626 20:03:30.705460   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:04:00.824330   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-050558: exit status 82 (2m1.033744809s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-050558"  ...
	* Stopping node "multinode-050558"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-050558" : exit status 82
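Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in stderr: minikube polls the kvm2 driver for the VM state and gives up while the machine still reports "Running". (The assertion above also quotes the earlier `node list` args instead of the `stop` command that actually failed.) A minimal sketch of the poll-until-stopped pattern, with a hypothetical getState standing in for the driver's state query; this is not minikube's actual stop implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState is a hypothetical stand-in for libmachine's Driver.GetState.
	func getState() string { return "Running" }

	// waitForStop polls until the VM leaves "Running" or the timeout expires,
	// which is how a stuck guest surfaces as a stop error after ~2 minutes.
	func waitForStop(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if getState() != "Running" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(2 * time.Minute); err != nil {
			fmt.Println("GUEST_STOP_TIMEOUT:", err) // surfaced here as exit status 82
		}
	}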
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-050558 --wait=true -v=8 --alsologtostderr
E0626 20:05:23.873058   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 20:06:48.326596   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 20:08:30.704798   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:09:00.824078   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 20:09:53.751057   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:11:48.327062   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 20:13:11.371437   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 20:13:30.705658   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:14:00.824584   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-050558 --wait=true -v=8 --alsologtostderr: (9m20.273550705s)
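The interleaved cert_rotation errors look like client-go's certificate-reload loop re-reading client certs referenced from the shared kubeconfig: the functional-244475, addons-118062, and ingress-addon-legacy-759751 profiles were torn down earlier in the run, so their cert files no longer exist. The error class is easy to reproduce with the standard library (path taken verbatim from the log; this is not client-go's rotation code), and it is most likely unrelated to the stop/start failure under test:

	package main

	import (
		"crypto/tls"
		"fmt"
	)

	// Load a client cert/key pair whose files were removed when the profile
	// was deleted; this prints the same "no such file or directory" error
	// that the cert_rotation lines above report.
	func main() {
		_, err := tls.LoadX509KeyPair(
			"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt",
			"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.key",
		)
		fmt.Println(err)
	}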
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-050558
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-050558 -n multinode-050558
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-050558 logs -n 25: (1.536046923s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-050558 ssh -n                                                                 | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-050558 cp multinode-050558-m02:/home/docker/cp-test.txt                       | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1420814181/001/cp-test_multinode-050558-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n                                                                 | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-050558 cp multinode-050558-m02:/home/docker/cp-test.txt                       | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558:/home/docker/cp-test_multinode-050558-m02_multinode-050558.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n                                                                 | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n multinode-050558 sudo cat                                       | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | /home/docker/cp-test_multinode-050558-m02_multinode-050558.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-050558 cp multinode-050558-m02:/home/docker/cp-test.txt                       | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m03:/home/docker/cp-test_multinode-050558-m02_multinode-050558-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n                                                                 | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n multinode-050558-m03 sudo cat                                   | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | /home/docker/cp-test_multinode-050558-m02_multinode-050558-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-050558 cp testdata/cp-test.txt                                                | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n                                                                 | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-050558 cp multinode-050558-m03:/home/docker/cp-test.txt                       | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1420814181/001/cp-test_multinode-050558-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n                                                                 | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-050558 cp multinode-050558-m03:/home/docker/cp-test.txt                       | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558:/home/docker/cp-test_multinode-050558-m03_multinode-050558.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n                                                                 | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n multinode-050558 sudo cat                                       | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | /home/docker/cp-test_multinode-050558-m03_multinode-050558.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-050558 cp multinode-050558-m03:/home/docker/cp-test.txt                       | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m02:/home/docker/cp-test_multinode-050558-m03_multinode-050558-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n                                                                 | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | multinode-050558-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-050558 ssh -n multinode-050558-m02 sudo cat                                   | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | /home/docker/cp-test_multinode-050558-m03_multinode-050558-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-050558 node stop m03                                                          | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	| node    | multinode-050558 node start                                                             | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC | 26 Jun 23 20:02 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-050558                                                                | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC |                     |
	| stop    | -p multinode-050558                                                                     | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:02 UTC |                     |
	| start   | -p multinode-050558                                                                     | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:04 UTC | 26 Jun 23 20:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-050558                                                                | multinode-050558 | jenkins | v1.30.1 | 26 Jun 23 20:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 20:04:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 20:04:51.619760   30564 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:04:51.619955   30564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:04:51.619968   30564 out.go:309] Setting ErrFile to fd 2...
	I0626 20:04:51.619975   30564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:04:51.620120   30564 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:04:51.620698   30564 out.go:303] Setting JSON to false
	I0626 20:04:51.621607   30564 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2839,"bootTime":1687807053,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 20:04:51.621663   30564 start.go:137] virtualization: kvm guest
	I0626 20:04:51.624136   30564 out.go:177] * [multinode-050558] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 20:04:51.625565   30564 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 20:04:51.625618   30564 notify.go:220] Checking for updates...
	I0626 20:04:51.626892   30564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 20:04:51.628367   30564 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:04:51.629767   30564 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:04:51.631369   30564 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 20:04:51.632970   30564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 20:04:51.634787   30564 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:04:51.634898   30564 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 20:04:51.635280   30564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:04:51.635335   30564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:04:51.650025   30564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45429
	I0626 20:04:51.650472   30564 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:04:51.651023   30564 main.go:141] libmachine: Using API Version  1
	I0626 20:04:51.651040   30564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:04:51.651385   30564 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:04:51.651568   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:04:51.687520   30564 out.go:177] * Using the kvm2 driver based on existing profile
	I0626 20:04:51.689045   30564 start.go:297] selected driver: kvm2
	I0626 20:04:51.689065   30564 start.go:954] validating driver "kvm2" against &{Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:04:51.689226   30564 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 20:04:51.689659   30564 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:04:51.689761   30564 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 20:04:51.705455   30564 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 20:04:51.706159   30564 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 20:04:51.706192   30564 cni.go:84] Creating CNI manager for ""
	I0626 20:04:51.706197   30564 cni.go:137] 3 nodes found, recommending kindnet
	I0626 20:04:51.706206   30564 start_flags.go:319] config:
	{Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:04:51.706458   30564 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:04:51.709711   30564 out.go:177] * Starting control plane node multinode-050558 in cluster multinode-050558
	I0626 20:04:51.711436   30564 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:04:51.711483   30564 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 20:04:51.711503   30564 cache.go:57] Caching tarball of preloaded images
	I0626 20:04:51.711606   30564 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 20:04:51.711618   30564 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 20:04:51.711780   30564 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 20:04:51.712084   30564 start.go:365] acquiring machines lock for multinode-050558: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:04:51.712153   30564 start.go:369] acquired machines lock for "multinode-050558" in 34.616µs
	I0626 20:04:51.712173   30564 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:04:51.712188   30564 fix.go:54] fixHost starting: 
	I0626 20:04:51.712576   30564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:04:51.712619   30564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:04:51.726652   30564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35075
	I0626 20:04:51.727096   30564 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:04:51.727618   30564 main.go:141] libmachine: Using API Version  1
	I0626 20:04:51.727643   30564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:04:51.727976   30564 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:04:51.728336   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:04:51.728608   30564 main.go:141] libmachine: (multinode-050558) Calling .GetState
	I0626 20:04:51.730335   30564 fix.go:102] recreateIfNeeded on multinode-050558: state=Running err=<nil>
	W0626 20:04:51.730369   30564 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:04:51.732343   30564 out.go:177] * Updating the running kvm2 "multinode-050558" VM ...
	I0626 20:04:51.734039   30564 machine.go:88] provisioning docker machine ...
	I0626 20:04:51.734064   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:04:51.734326   30564 main.go:141] libmachine: (multinode-050558) Calling .GetMachineName
	I0626 20:04:51.734530   30564 buildroot.go:166] provisioning hostname "multinode-050558"
	I0626 20:04:51.734551   30564 main.go:141] libmachine: (multinode-050558) Calling .GetMachineName
	I0626 20:04:51.734691   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:04:51.737156   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:04:51.737673   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:04:51.737704   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:04:51.737810   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:04:51.738008   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:04:51.738180   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:04:51.738320   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:04:51.738574   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:04:51.739197   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 20:04:51.739220   30564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-050558 && echo "multinode-050558" | sudo tee /etc/hostname
	I0626 20:05:10.161719   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:16.241720   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:19.313637   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:25.393676   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:28.465714   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:34.545679   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:37.617602   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:43.697716   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:46.769655   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:52.849657   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:05:55.921687   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:02.001609   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:05.073720   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:11.153630   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:14.225614   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:20.305692   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:23.377601   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:29.457631   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:32.529576   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:38.609748   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:41.681657   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:47.761734   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:50.833653   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:56.913631   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:06:59.985646   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:06.065638   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:09.137685   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:15.217619   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:18.289664   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:24.369658   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:27.441697   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:33.521644   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:36.593597   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:42.673658   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:45.745594   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:51.825663   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:07:54.897658   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:00.977696   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:04.049599   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:10.129649   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:13.201632   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:19.281609   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:22.353638   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:28.433636   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:31.505618   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:37.585709   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:40.657658   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:46.737621   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:49.809558   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:55.889659   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:08:58.961667   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:05.041636   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:08.113667   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:14.193663   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:17.265593   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:23.345614   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:26.417642   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:32.497664   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:35.569667   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:41.649667   30564 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.229:22: connect: no route to host
	I0626 20:09:44.651851   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:09:44.651908   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:09:44.653840   30564 machine.go:91] provisioned docker machine in 4m52.919767443s
	I0626 20:09:44.653884   30564 fix.go:56] fixHost completed within 4m52.941696003s
	I0626 20:09:44.653895   30564 start.go:83] releasing machines lock for "multinode-050558", held for 4m52.941733562s
	W0626 20:09:44.653924   30564 start.go:672] error starting host: provision: host is not running
	W0626 20:09:44.654017   30564 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0626 20:09:44.654029   30564 start.go:687] Will try again in 5 seconds ...
	I0626 20:09:49.656983   30564 start.go:365] acquiring machines lock for multinode-050558: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:09:49.657092   30564 start.go:369] acquired machines lock for "multinode-050558" in 62.051µs
	I0626 20:09:49.657116   30564 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:09:49.657122   30564 fix.go:54] fixHost starting: 
	I0626 20:09:49.657420   30564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:09:49.657440   30564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:09:49.671715   30564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0626 20:09:49.672135   30564 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:09:49.672647   30564 main.go:141] libmachine: Using API Version  1
	I0626 20:09:49.672668   30564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:09:49.672939   30564 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:09:49.673114   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:09:49.673275   30564 main.go:141] libmachine: (multinode-050558) Calling .GetState
	I0626 20:09:49.674861   30564 fix.go:102] recreateIfNeeded on multinode-050558: state=Stopped err=<nil>
	I0626 20:09:49.674887   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	W0626 20:09:49.675027   30564 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:09:49.677075   30564 out.go:177] * Restarting existing kvm2 VM for "multinode-050558" ...
	I0626 20:09:49.678240   30564 main.go:141] libmachine: (multinode-050558) Calling .Start
	I0626 20:09:49.678409   30564 main.go:141] libmachine: (multinode-050558) Ensuring networks are active...
	I0626 20:09:49.679081   30564 main.go:141] libmachine: (multinode-050558) Ensuring network default is active
	I0626 20:09:49.679540   30564 main.go:141] libmachine: (multinode-050558) Ensuring network mk-multinode-050558 is active
	I0626 20:09:49.679900   30564 main.go:141] libmachine: (multinode-050558) Getting domain xml...
	I0626 20:09:49.680567   30564 main.go:141] libmachine: (multinode-050558) Creating domain...
	I0626 20:09:50.886299   30564 main.go:141] libmachine: (multinode-050558) Waiting to get IP...
	I0626 20:09:50.887111   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:50.887557   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:50.887655   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:50.887556   31353 retry.go:31] will retry after 214.911121ms: waiting for machine to come up
	I0626 20:09:51.103955   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:51.104401   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:51.104436   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:51.104337   31353 retry.go:31] will retry after 241.858993ms: waiting for machine to come up
	I0626 20:09:51.347682   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:51.348133   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:51.348169   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:51.348064   31353 retry.go:31] will retry after 437.623104ms: waiting for machine to come up
	I0626 20:09:51.787341   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:51.787734   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:51.787767   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:51.787685   31353 retry.go:31] will retry after 588.372523ms: waiting for machine to come up
	I0626 20:09:52.377055   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:52.377508   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:52.377533   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:52.377459   31353 retry.go:31] will retry after 759.293807ms: waiting for machine to come up
	I0626 20:09:53.138270   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:53.138759   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:53.138788   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:53.138713   31353 retry.go:31] will retry after 586.632452ms: waiting for machine to come up
	I0626 20:09:53.726381   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:53.726767   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:53.726792   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:53.726685   31353 retry.go:31] will retry after 846.538614ms: waiting for machine to come up
	I0626 20:09:54.574768   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:54.575225   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:54.575257   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:54.575179   31353 retry.go:31] will retry after 1.182162599s: waiting for machine to come up
	I0626 20:09:55.758871   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:55.759355   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:55.759388   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:55.759315   31353 retry.go:31] will retry after 1.126609578s: waiting for machine to come up
	I0626 20:09:56.887717   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:56.888157   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:56.888180   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:56.888140   31353 retry.go:31] will retry after 1.708774327s: waiting for machine to come up
	I0626 20:09:58.599276   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:09:58.599762   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:09:58.599791   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:09:58.599685   31353 retry.go:31] will retry after 2.392282381s: waiting for machine to come up
	I0626 20:10:00.993613   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:00.994049   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:10:00.994091   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:10:00.994004   31353 retry.go:31] will retry after 3.591475258s: waiting for machine to come up
	I0626 20:10:04.586599   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:04.586999   30564 main.go:141] libmachine: (multinode-050558) DBG | unable to find current IP address of domain multinode-050558 in network mk-multinode-050558
	I0626 20:10:04.587023   30564 main.go:141] libmachine: (multinode-050558) DBG | I0626 20:10:04.586943   31353 retry.go:31] will retry after 3.210180048s: waiting for machine to come up
	I0626 20:10:07.798540   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:07.798972   30564 main.go:141] libmachine: (multinode-050558) Found IP for machine: 192.168.39.229
	I0626 20:10:07.799003   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has current primary IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:07.799015   30564 main.go:141] libmachine: (multinode-050558) Reserving static IP address...
	I0626 20:10:07.799414   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "multinode-050558", mac: "52:54:00:b7:21:4e", ip: "192.168.39.229"} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:07.799433   30564 main.go:141] libmachine: (multinode-050558) Reserved static IP address: 192.168.39.229
	I0626 20:10:07.799449   30564 main.go:141] libmachine: (multinode-050558) DBG | skip adding static IP to network mk-multinode-050558 - found existing host DHCP lease matching {name: "multinode-050558", mac: "52:54:00:b7:21:4e", ip: "192.168.39.229"}
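Note: the "will retry after ..." lines above come from minikube's retry helper, which polls the libvirt DHCP leases with growing, jittered waits until the domain reports an address. A minimal sketch of that pattern, assuming a hypothetical lookup function in place of the real libvirt lease query (the waits and jitter are illustrative, not minikube's exact schedule):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address, sleeping with
    // jittered, growing backoff between attempts, much like the
    // "will retry after ...: waiting for machine to come up" lines above.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        backoff := 500 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            // Add up to 50% jitter so concurrent machines don't poll in lockstep.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt
        }
        return "", errors.New("machine never acquired an IP")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 4 { // simulate a VM that takes a few polls to lease an IP
                return "", errors.New("no lease yet")
            }
            return "192.168.39.229", nil
        }, 10)
        fmt.Println(ip, err)
    }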
	I0626 20:10:07.799467   30564 main.go:141] libmachine: (multinode-050558) DBG | Getting to WaitForSSH function...
	I0626 20:10:07.799481   30564 main.go:141] libmachine: (multinode-050558) Waiting for SSH to be available...
	I0626 20:10:07.801880   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:07.802222   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:07.802256   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:07.802384   30564 main.go:141] libmachine: (multinode-050558) DBG | Using SSH client type: external
	I0626 20:10:07.802423   30564 main.go:141] libmachine: (multinode-050558) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa (-rw-------)
	I0626 20:10:07.802460   30564 main.go:141] libmachine: (multinode-050558) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.229 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:10:07.802488   30564 main.go:141] libmachine: (multinode-050558) DBG | About to run SSH command:
	I0626 20:10:07.802505   30564 main.go:141] libmachine: (multinode-050558) DBG | exit 0
	I0626 20:10:07.897342   30564 main.go:141] libmachine: (multinode-050558) DBG | SSH cmd err, output: <nil>: 
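The `exit 0` probe above is run through the external ssh binary with exactly the options logged; the command succeeds only once sshd in the guest is accepting connections, which is all WaitForSSH needs. A self-contained reconstruction of that invocation, with the flags, host, and key path copied verbatim from the log (argument order, including the options after the destination, is preserved as logged):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "docker@192.168.39.229",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa",
            "-p", "22",
            "exit 0", // the remote command; its exit status is the whole probe
        }
        if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }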
	I0626 20:10:07.897781   30564 main.go:141] libmachine: (multinode-050558) Calling .GetConfigRaw
	I0626 20:10:07.898360   30564 main.go:141] libmachine: (multinode-050558) Calling .GetIP
	I0626 20:10:07.900500   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:07.900893   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:07.900926   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:07.901168   30564 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 20:10:07.901350   30564 machine.go:88] provisioning docker machine ...
	I0626 20:10:07.901366   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:10:07.901571   30564 main.go:141] libmachine: (multinode-050558) Calling .GetMachineName
	I0626 20:10:07.901736   30564 buildroot.go:166] provisioning hostname "multinode-050558"
	I0626 20:10:07.901755   30564 main.go:141] libmachine: (multinode-050558) Calling .GetMachineName
	I0626 20:10:07.901901   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:10:07.903951   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:07.904258   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:07.904285   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:07.904383   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:10:07.904551   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:07.904720   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:07.904852   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:10:07.905003   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:10:07.905427   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 20:10:07.905444   30564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-050558 && echo "multinode-050558" | sudo tee /etc/hostname
	I0626 20:10:08.045908   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-050558
	
	I0626 20:10:08.045933   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:10:08.048571   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.048930   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:08.048950   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.049140   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:10:08.049329   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:08.049525   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:08.049682   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:10:08.049895   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:10:08.050494   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 20:10:08.050522   30564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-050558' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-050558/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-050558' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:10:08.190211   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
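The embedded shell above makes the hostname mapping in /etc/hosts idempotent: it only rewrites an existing 127.0.1.1 line or appends a new one when no mapping is present. The same transformation, sketched in Go over an in-memory hosts file (a stand-in for the remote edit, not minikube's actual code path):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the shell above: if no /etc/hosts line already ends
    // with the hostname, rewrite the 127.0.1.1 entry or append a fresh one.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // already mapped; the edit is a no-op
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n", "multinode-050558"))
    }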
	I0626 20:10:08.190241   30564 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:10:08.190280   30564 buildroot.go:174] setting up certificates
	I0626 20:10:08.190288   30564 provision.go:83] configureAuth start
	I0626 20:10:08.190298   30564 main.go:141] libmachine: (multinode-050558) Calling .GetMachineName
	I0626 20:10:08.190550   30564 main.go:141] libmachine: (multinode-050558) Calling .GetIP
	I0626 20:10:08.193019   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.193386   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:08.193448   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.193517   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:10:08.195498   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.195934   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:08.195966   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.196131   30564 provision.go:138] copyHostCerts
	I0626 20:10:08.196161   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:10:08.196215   30564 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:10:08.196228   30564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:10:08.196302   30564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:10:08.196421   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:10:08.196454   30564 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:10:08.196465   30564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:10:08.196514   30564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:10:08.196590   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:10:08.196614   30564 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:10:08.196623   30564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:10:08.196658   30564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:10:08.196733   30564 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.multinode-050558 san=[192.168.39.229 192.168.39.229 localhost 127.0.0.1 minikube multinode-050558]
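The server certificate above is generated with the SAN list [192.168.39.229 192.168.39.229 localhost 127.0.0.1 minikube multinode-050558]. A compact sketch of issuing such a certificate with Go's crypto/x509 (self-signed here for brevity, whereas the log signs with the minikube CA key; values are taken from the log line above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-050558"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the log: IPs go in IPAddresses, names in DNSNames.
            IPAddresses: []net.IP{net.ParseIP("192.168.39.229"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "multinode-050558"},
        }
        // Self-signed: template doubles as parent. minikube would pass the CA
        // certificate and CA private key here instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }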
	I0626 20:10:08.459134   30564 provision.go:172] copyRemoteCerts
	I0626 20:10:08.459185   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:10:08.459210   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:10:08.461726   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.462034   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:08.462067   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.462220   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:10:08.462424   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:08.462567   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:10:08.462707   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:10:08.554062   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0626 20:10:08.554131   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:10:08.577153   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0626 20:10:08.577206   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0626 20:10:08.599912   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0626 20:10:08.599964   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 20:10:08.623302   30564 provision.go:86] duration metric: configureAuth took 432.989888ms
	I0626 20:10:08.623330   30564 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:10:08.623555   30564 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:10:08.623636   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:10:08.626202   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.626472   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:08.626502   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.626702   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:10:08.626876   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:08.627019   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:08.627169   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:10:08.627311   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:10:08.627694   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 20:10:08.627714   30564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:10:08.954076   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:10:08.954095   30564 machine.go:91] provisioned docker machine in 1.052734179s
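The `%!s(MISSING)` fragments in the command above (and in the `date +%!s(MISSING).%!N(MISSING)` probe further down) are not corruption in this report: the shell commands genuinely contain `%s` and `%N`, and when the command string is routed through a Printf-style logger as its format string, Go's fmt package flags the unmatched verbs. A one-line reproduction:

    package main

    import "fmt"

    func main() {
        // The shell command legitimately contains %s, but handing it to Printf
        // as the format string leaves the verb with no operand, so fmt prints
        // the diagnostic %!s(MISSING) in its place.
        fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s ...\n")
        // Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) ...
    }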
	I0626 20:10:08.954104   30564 start.go:300] post-start starting for "multinode-050558" (driver="kvm2")
	I0626 20:10:08.954112   30564 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:10:08.954127   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:10:08.954492   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:10:08.954518   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:10:08.957645   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.958053   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:08.958091   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:08.958220   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:10:08.958444   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:08.958614   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:10:08.958757   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:10:09.051061   30564 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:10:09.055314   30564 command_runner.go:130] > NAME=Buildroot
	I0626 20:10:09.055337   30564 command_runner.go:130] > VERSION=2021.02.12-1-ge2e95ab-dirty
	I0626 20:10:09.055342   30564 command_runner.go:130] > ID=buildroot
	I0626 20:10:09.055347   30564 command_runner.go:130] > VERSION_ID=2021.02.12
	I0626 20:10:09.055352   30564 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0626 20:10:09.055541   30564 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:10:09.055562   30564 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:10:09.055636   30564 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:10:09.055770   30564 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:10:09.055782   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /etc/ssl/certs/144432.pem
	I0626 20:10:09.055883   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:10:09.064303   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:10:09.087001   30564 start.go:303] post-start completed in 132.884615ms
	I0626 20:10:09.087027   30564 fix.go:56] fixHost completed within 19.429905003s
	I0626 20:10:09.087047   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:10:09.089530   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:09.089886   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:09.089911   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:09.090099   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:10:09.090299   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:09.090471   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:09.090632   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:10:09.090787   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:10:09.091195   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0626 20:10:09.091211   30564 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 20:10:09.222405   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687810209.170860260
	
	I0626 20:10:09.222426   30564 fix.go:206] guest clock: 1687810209.170860260
	I0626 20:10:09.222436   30564 fix.go:219] Guest: 2023-06-26 20:10:09.17086026 +0000 UTC Remote: 2023-06-26 20:10:09.087030089 +0000 UTC m=+317.501104486 (delta=83.830171ms)
	I0626 20:10:09.222457   30564 fix.go:190] guest clock delta is within tolerance: 83.830171ms
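The tolerance check above compares the guest's `date +%s.%N` output against the host clock. Recomputing the logged delta with Go's time package, using the exact timestamps from the lines above (the 2s threshold is a placeholder for this sketch, not necessarily the value minikube uses):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1687810209, 170860260)                     // parsed from date +%s.%N
        remote := time.Date(2023, 6, 26, 20, 10, 9, 87030089, time.UTC) // host-side reading
        delta := guest.Sub(remote)
        const tolerance = 2 * time.Second // hypothetical threshold for the sketch
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance)
        // Prints delta=83.830171ms, matching the log line above.
    }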
	I0626 20:10:09.222463   30564 start.go:83] releasing machines lock for "multinode-050558", held for 19.565358958s
	I0626 20:10:09.222489   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:10:09.222712   30564 main.go:141] libmachine: (multinode-050558) Calling .GetIP
	I0626 20:10:09.225290   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:09.225720   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:09.225746   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:09.225902   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:10:09.226383   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:10:09.226554   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:10:09.226625   30564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:10:09.226664   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:10:09.226769   30564 ssh_runner.go:195] Run: cat /version.json
	I0626 20:10:09.226798   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:10:09.228895   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:09.229186   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:09.229220   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:09.229257   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:09.229409   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:10:09.229562   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:09.229701   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:10:09.229802   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:09.229828   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:09.229822   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:10:09.229995   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:10:09.230175   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:10:09.230327   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:10:09.230493   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:10:09.340808   30564 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0626 20:10:09.340876   30564 command_runner.go:130] > {"iso_version": "v1.30.1-1687455737-16703", "kicbase_version": "v0.0.39-1687367788-16703", "minikube_version": "v1.30.1", "commit": "698b58f2be1e4f36ba4ac648454cf7f7b59eb6ea"}
	I0626 20:10:09.341005   30564 ssh_runner.go:195] Run: systemctl --version
	I0626 20:10:09.346469   30564 command_runner.go:130] > systemd 247 (247)
	I0626 20:10:09.346514   30564 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0626 20:10:09.346757   30564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:10:09.489208   30564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 20:10:09.495514   30564 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0626 20:10:09.496041   30564 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:10:09.496106   30564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:10:09.510227   30564 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0626 20:10:09.510280   30564 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:10:09.510294   30564 start.go:466] detecting cgroup driver to use...
	I0626 20:10:09.510349   30564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:10:09.523468   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:10:09.536327   30564 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:10:09.536393   30564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:10:09.550080   30564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:10:09.563929   30564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:10:09.577565   30564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0626 20:10:09.673844   30564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:10:09.793009   30564 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0626 20:10:09.793050   30564 docker.go:212] disabling docker service ...
	I0626 20:10:09.793104   30564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:10:09.806053   30564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:10:09.817796   30564 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0626 20:10:09.817900   30564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:10:09.928293   30564 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0626 20:10:09.928415   30564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:10:10.037469   30564 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0626 20:10:10.037491   30564 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0626 20:10:10.037600   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:10:10.050562   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:10:10.068817   30564 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0626 20:10:10.068852   30564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:10:10.068892   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:10:10.079157   30564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:10:10.079213   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:10:10.088720   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:10:10.097521   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:10:10.107291   30564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:10:10.116489   30564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:10:10.124904   30564 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:10:10.125089   30564 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:10:10.125127   30564 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:10:10.136988   30564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
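The sequence above is a fallback: when the bridge-netfilter sysctl key is absent, the br_netfilter kernel module has not been loaded yet, so it is loaded with modprobe and IPv4 forwarding is then enabled. Sketched as straightforward exec calls (requires root; commands and paths as logged):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // If the key is missing, sysctl exits non-zero ("cannot stat ...").
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("sysctl key missing, loading br_netfilter:", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                panic(err)
            }
        }
        // Enable IPv4 forwarding, as the next logged command does.
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            panic(err)
        }
    }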
	I0626 20:10:10.146365   30564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:10:10.260457   30564 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:10:10.429133   30564 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:10:10.429208   30564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:10:10.434244   30564 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0626 20:10:10.434262   30564 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0626 20:10:10.434269   30564 command_runner.go:130] > Device: 16h/22d	Inode: 759         Links: 1
	I0626 20:10:10.434276   30564 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 20:10:10.434280   30564 command_runner.go:130] > Access: 2023-06-26 20:10:10.364403478 +0000
	I0626 20:10:10.434286   30564 command_runner.go:130] > Modify: 2023-06-26 20:10:10.364403478 +0000
	I0626 20:10:10.434290   30564 command_runner.go:130] > Change: 2023-06-26 20:10:10.364403478 +0000
	I0626 20:10:10.434294   30564 command_runner.go:130] >  Birth: -
	I0626 20:10:10.434387   30564 start.go:534] Will wait 60s for crictl version
	I0626 20:10:10.434427   30564 ssh_runner.go:195] Run: which crictl
	I0626 20:10:10.438047   30564 command_runner.go:130] > /usr/bin/crictl
	I0626 20:10:10.438104   30564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:10:10.474522   30564 command_runner.go:130] > Version:  0.1.0
	I0626 20:10:10.474543   30564 command_runner.go:130] > RuntimeName:  cri-o
	I0626 20:10:10.474547   30564 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0626 20:10:10.474553   30564 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0626 20:10:10.474567   30564 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:10:10.474641   30564 ssh_runner.go:195] Run: crio --version
	I0626 20:10:10.523345   30564 command_runner.go:130] > crio version 1.24.1
	I0626 20:10:10.523373   30564 command_runner.go:130] > Version:          1.24.1
	I0626 20:10:10.523380   30564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 20:10:10.523385   30564 command_runner.go:130] > GitTreeState:     dirty
	I0626 20:10:10.523394   30564 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 20:10:10.523399   30564 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 20:10:10.523405   30564 command_runner.go:130] > Compiler:         gc
	I0626 20:10:10.523413   30564 command_runner.go:130] > Platform:         linux/amd64
	I0626 20:10:10.523420   30564 command_runner.go:130] > Linkmode:         dynamic
	I0626 20:10:10.523438   30564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 20:10:10.523471   30564 command_runner.go:130] > SeccompEnabled:   true
	I0626 20:10:10.523483   30564 command_runner.go:130] > AppArmorEnabled:  false
	I0626 20:10:10.523552   30564 ssh_runner.go:195] Run: crio --version
	I0626 20:10:10.566488   30564 command_runner.go:130] > crio version 1.24.1
	I0626 20:10:10.566507   30564 command_runner.go:130] > Version:          1.24.1
	I0626 20:10:10.566531   30564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 20:10:10.566536   30564 command_runner.go:130] > GitTreeState:     dirty
	I0626 20:10:10.566542   30564 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 20:10:10.566546   30564 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 20:10:10.566550   30564 command_runner.go:130] > Compiler:         gc
	I0626 20:10:10.566554   30564 command_runner.go:130] > Platform:         linux/amd64
	I0626 20:10:10.566565   30564 command_runner.go:130] > Linkmode:         dynamic
	I0626 20:10:10.566575   30564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 20:10:10.566582   30564 command_runner.go:130] > SeccompEnabled:   true
	I0626 20:10:10.566592   30564 command_runner.go:130] > AppArmorEnabled:  false
	I0626 20:10:10.569879   30564 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:10:10.571368   30564 main.go:141] libmachine: (multinode-050558) Calling .GetIP
	I0626 20:10:10.573697   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:10.574095   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:10:10.574127   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:10:10.574256   30564 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 20:10:10.578629   30564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:10:10.591827   30564 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:10:10.591894   30564 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:10:10.620841   30564 command_runner.go:130] > {
	I0626 20:10:10.620861   30564 command_runner.go:130] >   "images": [
	I0626 20:10:10.620867   30564 command_runner.go:130] >     {
	I0626 20:10:10.620879   30564 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0626 20:10:10.620886   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:10.620923   30564 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0626 20:10:10.620934   30564 command_runner.go:130] >       ],
	I0626 20:10:10.620940   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:10.620955   30564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0626 20:10:10.620965   30564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0626 20:10:10.620971   30564 command_runner.go:130] >       ],
	I0626 20:10:10.620976   30564 command_runner.go:130] >       "size": "750414",
	I0626 20:10:10.620982   30564 command_runner.go:130] >       "uid": {
	I0626 20:10:10.620988   30564 command_runner.go:130] >         "value": "65535"
	I0626 20:10:10.620994   30564 command_runner.go:130] >       },
	I0626 20:10:10.621005   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:10.621015   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:10.621028   30564 command_runner.go:130] >     }
	I0626 20:10:10.621035   30564 command_runner.go:130] >   ]
	I0626 20:10:10.621040   30564 command_runner.go:130] > }
	I0626 20:10:10.622248   30564 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
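The preload decision above is made by parsing `crictl images --output json` and checking for a core image tag; since only the pause image is present, the images are assumed not preloaded and the tarball path is taken. A minimal sketch of that check against the JSON shape shown (field names taken from the output above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageList declares only the fields the preload check needs from
    // `crictl images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        raw := `{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`
        var list imageList
        if err := json.Unmarshal([]byte(raw), &list); err != nil {
            panic(err)
        }
        // If a core image such as kube-apiserver is absent, assume the preload
        // tarball still needs to be copied over and extracted.
        want := "registry.k8s.io/kube-apiserver:v1.27.3"
        found := false
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    found = true
                }
            }
        }
        fmt.Println("preloaded:", found) // false here, matching the log's decision
    }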
	I0626 20:10:10.622320   30564 ssh_runner.go:195] Run: which lz4
	I0626 20:10:10.626153   30564 command_runner.go:130] > /usr/bin/lz4
	I0626 20:10:10.626182   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0626 20:10:10.626248   30564 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0626 20:10:10.630377   30564 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:10:10.630432   30564 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:10:10.630453   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:10:12.340640   30564 crio.go:444] Took 1.714409 seconds to copy over tarball
	I0626 20:10:12.340696   30564 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:10:15.256569   30564 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.915846432s)
	I0626 20:10:15.256599   30564 crio.go:451] Took 2.915939 seconds to extract the tarball
	I0626 20:10:15.256607   30564 ssh_runner.go:146] rm: /preloaded.tar.lz4
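Copying the 437 MB preload tarball and piping it through `tar -I lz4` dominates this phase (about 1.7s to copy, 2.9s to extract, per the timings above). The same extraction could in principle be done in-process; a sketch using the stdlib tar reader with a third-party lz4 decompressor (github.com/pierrec/lz4/v4, assumed available; listing entries only rather than writing files):

    package main

    import (
        "archive/tar"
        "fmt"
        "io"
        "os"

        "github.com/pierrec/lz4/v4"
    )

    func main() {
        // In-process equivalent of `tar -I lz4 -xf /preloaded.tar.lz4`:
        // stream-decompress, then walk the tar entries.
        f, err := os.Open("/preloaded.tar.lz4")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        tr := tar.NewReader(lz4.NewReader(f))
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break // end of archive
            }
            if err != nil {
                panic(err)
            }
            fmt.Println(hdr.Name) // a real extractor would create this entry under /var
        }
    }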
	I0626 20:10:15.295688   30564 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:10:15.338924   30564 command_runner.go:130] > {
	I0626 20:10:15.338950   30564 command_runner.go:130] >   "images": [
	I0626 20:10:15.338956   30564 command_runner.go:130] >     {
	I0626 20:10:15.338967   30564 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0626 20:10:15.338980   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:15.338988   30564 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0626 20:10:15.338995   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339007   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:15.339022   30564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0626 20:10:15.339038   30564 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0626 20:10:15.339045   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339053   30564 command_runner.go:130] >       "size": "65249302",
	I0626 20:10:15.339063   30564 command_runner.go:130] >       "uid": null,
	I0626 20:10:15.339071   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:15.339084   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:15.339093   30564 command_runner.go:130] >     },
	I0626 20:10:15.339100   30564 command_runner.go:130] >     {
	I0626 20:10:15.339111   30564 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0626 20:10:15.339118   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:15.339128   30564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0626 20:10:15.339134   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339142   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:15.339158   30564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0626 20:10:15.339174   30564 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0626 20:10:15.339183   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339202   30564 command_runner.go:130] >       "size": "31470524",
	I0626 20:10:15.339213   30564 command_runner.go:130] >       "uid": null,
	I0626 20:10:15.339228   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:15.339236   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:15.339243   30564 command_runner.go:130] >     },
	I0626 20:10:15.339249   30564 command_runner.go:130] >     {
	I0626 20:10:15.339263   30564 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0626 20:10:15.339273   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:15.339286   30564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0626 20:10:15.339295   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339306   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:15.339321   30564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0626 20:10:15.339337   30564 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0626 20:10:15.339346   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339354   30564 command_runner.go:130] >       "size": "53621675",
	I0626 20:10:15.339364   30564 command_runner.go:130] >       "uid": null,
	I0626 20:10:15.339372   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:15.339382   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:15.339393   30564 command_runner.go:130] >     },
	I0626 20:10:15.339402   30564 command_runner.go:130] >     {
	I0626 20:10:15.339413   30564 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0626 20:10:15.339422   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:15.339432   30564 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0626 20:10:15.339441   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339449   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:15.339464   30564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0626 20:10:15.339475   30564 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0626 20:10:15.339482   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339492   30564 command_runner.go:130] >       "size": "297083935",
	I0626 20:10:15.339499   30564 command_runner.go:130] >       "uid": {
	I0626 20:10:15.339509   30564 command_runner.go:130] >         "value": "0"
	I0626 20:10:15.339527   30564 command_runner.go:130] >       },
	I0626 20:10:15.339537   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:15.339548   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:15.339554   30564 command_runner.go:130] >     },
	I0626 20:10:15.339563   30564 command_runner.go:130] >     {
	I0626 20:10:15.339577   30564 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0626 20:10:15.339588   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:15.339600   30564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0626 20:10:15.339609   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339616   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:15.339632   30564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0626 20:10:15.339647   30564 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0626 20:10:15.339655   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339663   30564 command_runner.go:130] >       "size": "122065872",
	I0626 20:10:15.339673   30564 command_runner.go:130] >       "uid": {
	I0626 20:10:15.339682   30564 command_runner.go:130] >         "value": "0"
	I0626 20:10:15.339689   30564 command_runner.go:130] >       },
	I0626 20:10:15.339699   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:15.339708   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:15.339715   30564 command_runner.go:130] >     },
	I0626 20:10:15.339724   30564 command_runner.go:130] >     {
	I0626 20:10:15.339735   30564 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0626 20:10:15.339745   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:15.339761   30564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0626 20:10:15.339770   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339782   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:15.339796   30564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0626 20:10:15.339812   30564 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0626 20:10:15.339821   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339830   30564 command_runner.go:130] >       "size": "113919286",
	I0626 20:10:15.339840   30564 command_runner.go:130] >       "uid": {
	I0626 20:10:15.339851   30564 command_runner.go:130] >         "value": "0"
	I0626 20:10:15.339858   30564 command_runner.go:130] >       },
	I0626 20:10:15.339865   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:15.339871   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:15.339880   30564 command_runner.go:130] >     },
	I0626 20:10:15.339887   30564 command_runner.go:130] >     {
	I0626 20:10:15.339902   30564 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0626 20:10:15.339912   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:15.339923   30564 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0626 20:10:15.339930   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339942   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:15.339958   30564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0626 20:10:15.339979   30564 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0626 20:10:15.339988   30564 command_runner.go:130] >       ],
	I0626 20:10:15.339996   30564 command_runner.go:130] >       "size": "72713623",
	I0626 20:10:15.340006   30564 command_runner.go:130] >       "uid": null,
	I0626 20:10:15.340015   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:15.340023   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:15.340031   30564 command_runner.go:130] >     },
	I0626 20:10:15.340038   30564 command_runner.go:130] >     {
	I0626 20:10:15.340052   30564 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0626 20:10:15.340061   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:15.340071   30564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0626 20:10:15.340080   30564 command_runner.go:130] >       ],
	I0626 20:10:15.340088   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:15.340105   30564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0626 20:10:15.340178   30564 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0626 20:10:15.340187   30564 command_runner.go:130] >       ],
	I0626 20:10:15.340198   30564 command_runner.go:130] >       "size": "59811126",
	I0626 20:10:15.340203   30564 command_runner.go:130] >       "uid": {
	I0626 20:10:15.340210   30564 command_runner.go:130] >         "value": "0"
	I0626 20:10:15.340219   30564 command_runner.go:130] >       },
	I0626 20:10:15.340226   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:15.340235   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:15.340242   30564 command_runner.go:130] >     },
	I0626 20:10:15.340251   30564 command_runner.go:130] >     {
	I0626 20:10:15.340262   30564 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0626 20:10:15.340272   30564 command_runner.go:130] >       "repoTags": [
	I0626 20:10:15.340283   30564 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0626 20:10:15.340291   30564 command_runner.go:130] >       ],
	I0626 20:10:15.340299   30564 command_runner.go:130] >       "repoDigests": [
	I0626 20:10:15.340314   30564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0626 20:10:15.340329   30564 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0626 20:10:15.340338   30564 command_runner.go:130] >       ],
	I0626 20:10:15.340347   30564 command_runner.go:130] >       "size": "750414",
	I0626 20:10:15.340357   30564 command_runner.go:130] >       "uid": {
	I0626 20:10:15.340370   30564 command_runner.go:130] >         "value": "65535"
	I0626 20:10:15.340379   30564 command_runner.go:130] >       },
	I0626 20:10:15.340387   30564 command_runner.go:130] >       "username": "",
	I0626 20:10:15.340396   30564 command_runner.go:130] >       "spec": null
	I0626 20:10:15.340402   30564 command_runner.go:130] >     }
	I0626 20:10:15.340409   30564 command_runner.go:130] >   ]
	I0626 20:10:15.340417   30564 command_runner.go:130] > }
	I0626 20:10:15.340535   30564 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:10:15.340546   30564 cache_images.go:84] Images are preloaded, skipping loading
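	The JSON inventory closed out above is the runtime's per-image report (id, repoTags, repoDigests, size, uid, username). A minimal sketch of decoding such a dump, assuming it is the CRI-style image list shown in the log — the struct and field names below are illustrative, not minikube's own code:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors the JSON keys visible in the log above; note that
	// "size" is reported as a decimal string (e.g. "750414"), not a number.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Username    string   `json:"username"`
		} `json:"images"`
	}

	func main() {
		raw := []byte(`{"images":[{"id":"e6f1816883","repoTags":["registry.k8s.io/pause:3.9"],"repoDigests":[],"size":"750414","username":""}]}`)
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, "size:", img.Size, "bytes")
		}
	}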
	I0626 20:10:15.340617   30564 ssh_runner.go:195] Run: crio config
	I0626 20:10:15.405554   30564 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0626 20:10:15.405582   30564 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0626 20:10:15.405594   30564 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0626 20:10:15.405599   30564 command_runner.go:130] > #
	I0626 20:10:15.405609   30564 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0626 20:10:15.405620   30564 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0626 20:10:15.405634   30564 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0626 20:10:15.405652   30564 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0626 20:10:15.405662   30564 command_runner.go:130] > # reload'.
	I0626 20:10:15.405677   30564 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0626 20:10:15.405691   30564 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0626 20:10:15.405705   30564 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0626 20:10:15.405718   30564 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0626 20:10:15.405727   30564 command_runner.go:130] > [crio]
	I0626 20:10:15.405739   30564 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0626 20:10:15.405751   30564 command_runner.go:130] > # containers images, in this directory.
	I0626 20:10:15.405762   30564 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0626 20:10:15.405780   30564 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0626 20:10:15.405792   30564 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0626 20:10:15.405806   30564 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0626 20:10:15.405819   30564 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0626 20:10:15.405828   30564 command_runner.go:130] > storage_driver = "overlay"
	I0626 20:10:15.405840   30564 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0626 20:10:15.405853   30564 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0626 20:10:15.405864   30564 command_runner.go:130] > storage_option = [
	I0626 20:10:15.405875   30564 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0626 20:10:15.405884   30564 command_runner.go:130] > ]
	I0626 20:10:15.405902   30564 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0626 20:10:15.405915   30564 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0626 20:10:15.405923   30564 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0626 20:10:15.405936   30564 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0626 20:10:15.405950   30564 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0626 20:10:15.405960   30564 command_runner.go:130] > # always happen on a node reboot
	I0626 20:10:15.405971   30564 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0626 20:10:15.405980   30564 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0626 20:10:15.405993   30564 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0626 20:10:15.406014   30564 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0626 20:10:15.406026   30564 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0626 20:10:15.406041   30564 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0626 20:10:15.406058   30564 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0626 20:10:15.406068   30564 command_runner.go:130] > # internal_wipe = true
	I0626 20:10:15.406080   30564 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0626 20:10:15.406093   30564 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0626 20:10:15.406103   30564 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0626 20:10:15.406115   30564 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0626 20:10:15.406131   30564 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0626 20:10:15.406141   30564 command_runner.go:130] > [crio.api]
	I0626 20:10:15.406152   30564 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0626 20:10:15.406163   30564 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0626 20:10:15.406175   30564 command_runner.go:130] > # IP address on which the stream server will listen.
	I0626 20:10:15.406186   30564 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0626 20:10:15.406197   30564 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0626 20:10:15.406209   30564 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0626 20:10:15.406219   30564 command_runner.go:130] > # stream_port = "0"
	I0626 20:10:15.406232   30564 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0626 20:10:15.406242   30564 command_runner.go:130] > # stream_enable_tls = false
	I0626 20:10:15.406253   30564 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0626 20:10:15.406263   30564 command_runner.go:130] > # stream_idle_timeout = ""
	I0626 20:10:15.406274   30564 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0626 20:10:15.406288   30564 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0626 20:10:15.406298   30564 command_runner.go:130] > # minutes.
	I0626 20:10:15.406306   30564 command_runner.go:130] > # stream_tls_cert = ""
	I0626 20:10:15.406319   30564 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0626 20:10:15.406334   30564 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0626 20:10:15.406344   30564 command_runner.go:130] > # stream_tls_key = ""
	I0626 20:10:15.406354   30564 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0626 20:10:15.406368   30564 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0626 20:10:15.406379   30564 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0626 20:10:15.406389   30564 command_runner.go:130] > # stream_tls_ca = ""
	I0626 20:10:15.406405   30564 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 20:10:15.406415   30564 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0626 20:10:15.406428   30564 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 20:10:15.406438   30564 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0626 20:10:15.406490   30564 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0626 20:10:15.406501   30564 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0626 20:10:15.406507   30564 command_runner.go:130] > [crio.runtime]
	I0626 20:10:15.406517   30564 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0626 20:10:15.406529   30564 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0626 20:10:15.406536   30564 command_runner.go:130] > # "nofile=1024:2048"
	I0626 20:10:15.406550   30564 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0626 20:10:15.406560   30564 command_runner.go:130] > # default_ulimits = [
	I0626 20:10:15.406572   30564 command_runner.go:130] > # ]
	I0626 20:10:15.406586   30564 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0626 20:10:15.406596   30564 command_runner.go:130] > # no_pivot = false
	I0626 20:10:15.406609   30564 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0626 20:10:15.406623   30564 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0626 20:10:15.406634   30564 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0626 20:10:15.406647   30564 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0626 20:10:15.406659   30564 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0626 20:10:15.406672   30564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 20:10:15.406683   30564 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0626 20:10:15.406695   30564 command_runner.go:130] > # Cgroup setting for conmon
	I0626 20:10:15.406709   30564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0626 20:10:15.406720   30564 command_runner.go:130] > conmon_cgroup = "pod"
	I0626 20:10:15.406733   30564 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0626 20:10:15.406745   30564 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0626 20:10:15.406759   30564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 20:10:15.406768   30564 command_runner.go:130] > conmon_env = [
	I0626 20:10:15.406782   30564 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0626 20:10:15.406793   30564 command_runner.go:130] > ]
	I0626 20:10:15.406804   30564 command_runner.go:130] > # Additional environment variables to set for all the
	I0626 20:10:15.406815   30564 command_runner.go:130] > # containers. These are overridden if set in the
	I0626 20:10:15.406828   30564 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0626 20:10:15.406838   30564 command_runner.go:130] > # default_env = [
	I0626 20:10:15.406847   30564 command_runner.go:130] > # ]
	I0626 20:10:15.406857   30564 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0626 20:10:15.406867   30564 command_runner.go:130] > # selinux = false
	I0626 20:10:15.406878   30564 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0626 20:10:15.406892   30564 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0626 20:10:15.406902   30564 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0626 20:10:15.406912   30564 command_runner.go:130] > # seccomp_profile = ""
	I0626 20:10:15.406923   30564 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0626 20:10:15.406936   30564 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0626 20:10:15.406950   30564 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0626 20:10:15.406961   30564 command_runner.go:130] > # which might increase security.
	I0626 20:10:15.406971   30564 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0626 20:10:15.406982   30564 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0626 20:10:15.406998   30564 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0626 20:10:15.407012   30564 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0626 20:10:15.407025   30564 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0626 20:10:15.407037   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:10:15.407048   30564 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0626 20:10:15.407061   30564 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0626 20:10:15.407072   30564 command_runner.go:130] > # the cgroup blockio controller.
	I0626 20:10:15.407082   30564 command_runner.go:130] > # blockio_config_file = ""
	I0626 20:10:15.407092   30564 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0626 20:10:15.407102   30564 command_runner.go:130] > # irqbalance daemon.
	I0626 20:10:15.407113   30564 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0626 20:10:15.407127   30564 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0626 20:10:15.407138   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:10:15.407148   30564 command_runner.go:130] > # rdt_config_file = ""
	I0626 20:10:15.407159   30564 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0626 20:10:15.407168   30564 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0626 20:10:15.407179   30564 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0626 20:10:15.407190   30564 command_runner.go:130] > # separate_pull_cgroup = ""
	I0626 20:10:15.407206   30564 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0626 20:10:15.407220   30564 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0626 20:10:15.407229   30564 command_runner.go:130] > # will be added.
	I0626 20:10:15.407240   30564 command_runner.go:130] > # default_capabilities = [
	I0626 20:10:15.407247   30564 command_runner.go:130] > # 	"CHOWN",
	I0626 20:10:15.407254   30564 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0626 20:10:15.407260   30564 command_runner.go:130] > # 	"FSETID",
	I0626 20:10:15.407270   30564 command_runner.go:130] > # 	"FOWNER",
	I0626 20:10:15.407278   30564 command_runner.go:130] > # 	"SETGID",
	I0626 20:10:15.407288   30564 command_runner.go:130] > # 	"SETUID",
	I0626 20:10:15.407296   30564 command_runner.go:130] > # 	"SETPCAP",
	I0626 20:10:15.407306   30564 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0626 20:10:15.407313   30564 command_runner.go:130] > # 	"KILL",
	I0626 20:10:15.407320   30564 command_runner.go:130] > # ]
	I0626 20:10:15.407333   30564 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0626 20:10:15.407346   30564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 20:10:15.407354   30564 command_runner.go:130] > # default_sysctls = [
	I0626 20:10:15.407363   30564 command_runner.go:130] > # ]
	I0626 20:10:15.407376   30564 command_runner.go:130] > # List of devices on the host that a
	I0626 20:10:15.407409   30564 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0626 20:10:15.407423   30564 command_runner.go:130] > # allowed_devices = [
	I0626 20:10:15.407432   30564 command_runner.go:130] > # 	"/dev/fuse",
	I0626 20:10:15.407438   30564 command_runner.go:130] > # ]
	I0626 20:10:15.407450   30564 command_runner.go:130] > # List of additional devices, specified as
	I0626 20:10:15.407465   30564 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0626 20:10:15.407481   30564 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0626 20:10:15.407523   30564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 20:10:15.407533   30564 command_runner.go:130] > # additional_devices = [
	I0626 20:10:15.407539   30564 command_runner.go:130] > # ]
	I0626 20:10:15.407549   30564 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0626 20:10:15.407559   30564 command_runner.go:130] > # cdi_spec_dirs = [
	I0626 20:10:15.407569   30564 command_runner.go:130] > # 	"/etc/cdi",
	I0626 20:10:15.407579   30564 command_runner.go:130] > # 	"/var/run/cdi",
	I0626 20:10:15.407585   30564 command_runner.go:130] > # ]
	I0626 20:10:15.407598   30564 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0626 20:10:15.407612   30564 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0626 20:10:15.407624   30564 command_runner.go:130] > # Defaults to false.
	I0626 20:10:15.407633   30564 command_runner.go:130] > # device_ownership_from_security_context = false
	I0626 20:10:15.407647   30564 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0626 20:10:15.407659   30564 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0626 20:10:15.407669   30564 command_runner.go:130] > # hooks_dir = [
	I0626 20:10:15.407680   30564 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0626 20:10:15.407688   30564 command_runner.go:130] > # ]
	I0626 20:10:15.407699   30564 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0626 20:10:15.407713   30564 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0626 20:10:15.407725   30564 command_runner.go:130] > # its default mounts from the following two files:
	I0626 20:10:15.407733   30564 command_runner.go:130] > #
	I0626 20:10:15.407744   30564 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0626 20:10:15.407758   30564 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0626 20:10:15.407770   30564 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0626 20:10:15.407778   30564 command_runner.go:130] > #
	I0626 20:10:15.407789   30564 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0626 20:10:15.407803   30564 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0626 20:10:15.407817   30564 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0626 20:10:15.407834   30564 command_runner.go:130] > #      only add mounts it finds in this file.
	I0626 20:10:15.407842   30564 command_runner.go:130] > #
	I0626 20:10:15.407851   30564 command_runner.go:130] > # default_mounts_file = ""
	I0626 20:10:15.407862   30564 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0626 20:10:15.407877   30564 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0626 20:10:15.407886   30564 command_runner.go:130] > pids_limit = 1024
	I0626 20:10:15.407897   30564 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0626 20:10:15.407910   30564 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0626 20:10:15.407923   30564 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0626 20:10:15.407939   30564 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0626 20:10:15.407950   30564 command_runner.go:130] > # log_size_max = -1
	I0626 20:10:15.407964   30564 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0626 20:10:15.407974   30564 command_runner.go:130] > # log_to_journald = false
	I0626 20:10:15.407984   30564 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0626 20:10:15.407996   30564 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0626 20:10:15.408008   30564 command_runner.go:130] > # Path to directory for container attach sockets.
	I0626 20:10:15.408019   30564 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0626 20:10:15.408031   30564 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0626 20:10:15.408044   30564 command_runner.go:130] > # bind_mount_prefix = ""
	I0626 20:10:15.408057   30564 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0626 20:10:15.408066   30564 command_runner.go:130] > # read_only = false
	I0626 20:10:15.408076   30564 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0626 20:10:15.408090   30564 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0626 20:10:15.408100   30564 command_runner.go:130] > # live configuration reload.
	I0626 20:10:15.408108   30564 command_runner.go:130] > # log_level = "info"
	I0626 20:10:15.408120   30564 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0626 20:10:15.408131   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:10:15.408141   30564 command_runner.go:130] > # log_filter = ""
	I0626 20:10:15.408154   30564 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0626 20:10:15.408165   30564 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0626 20:10:15.408175   30564 command_runner.go:130] > # separated by comma.
	I0626 20:10:15.408183   30564 command_runner.go:130] > # uid_mappings = ""
	I0626 20:10:15.408196   30564 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0626 20:10:15.408209   30564 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0626 20:10:15.408219   30564 command_runner.go:130] > # separated by comma.
	I0626 20:10:15.408227   30564 command_runner.go:130] > # gid_mappings = ""
	I0626 20:10:15.408244   30564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0626 20:10:15.408258   30564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 20:10:15.408271   30564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 20:10:15.408282   30564 command_runner.go:130] > # minimum_mappable_uid = -1
	I0626 20:10:15.408292   30564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0626 20:10:15.408306   30564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 20:10:15.408319   30564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 20:10:15.408328   30564 command_runner.go:130] > # minimum_mappable_gid = -1
	I0626 20:10:15.408341   30564 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0626 20:10:15.408354   30564 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0626 20:10:15.408367   30564 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0626 20:10:15.408377   30564 command_runner.go:130] > # ctr_stop_timeout = 30
	I0626 20:10:15.408389   30564 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0626 20:10:15.408402   30564 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0626 20:10:15.408410   30564 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0626 20:10:15.408422   30564 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0626 20:10:15.408434   30564 command_runner.go:130] > drop_infra_ctr = false
	I0626 20:10:15.408448   30564 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0626 20:10:15.408463   30564 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0626 20:10:15.408482   30564 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0626 20:10:15.408492   30564 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0626 20:10:15.408502   30564 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0626 20:10:15.408513   30564 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0626 20:10:15.408524   30564 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0626 20:10:15.408538   30564 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0626 20:10:15.408549   30564 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0626 20:10:15.408560   30564 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0626 20:10:15.408575   30564 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0626 20:10:15.408588   30564 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0626 20:10:15.408596   30564 command_runner.go:130] > # default_runtime = "runc"
	I0626 20:10:15.408608   30564 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0626 20:10:15.408624   30564 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0626 20:10:15.408642   30564 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0626 20:10:15.408654   30564 command_runner.go:130] > # creation as a file is not desired either.
	I0626 20:10:15.408687   30564 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0626 20:10:15.408700   30564 command_runner.go:130] > # the hostname is being managed dynamically.
	I0626 20:10:15.408736   30564 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0626 20:10:15.408745   30564 command_runner.go:130] > # ]
	I0626 20:10:15.408756   30564 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0626 20:10:15.408770   30564 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0626 20:10:15.408784   30564 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0626 20:10:15.408797   30564 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0626 20:10:15.408806   30564 command_runner.go:130] > #
	I0626 20:10:15.408814   30564 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0626 20:10:15.408826   30564 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0626 20:10:15.408836   30564 command_runner.go:130] > #  runtime_type = "oci"
	I0626 20:10:15.408849   30564 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0626 20:10:15.408859   30564 command_runner.go:130] > #  privileged_without_host_devices = false
	I0626 20:10:15.408867   30564 command_runner.go:130] > #  allowed_annotations = []
	I0626 20:10:15.408876   30564 command_runner.go:130] > # Where:
	I0626 20:10:15.408888   30564 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0626 20:10:15.408901   30564 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0626 20:10:15.408915   30564 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0626 20:10:15.408928   30564 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0626 20:10:15.408940   30564 command_runner.go:130] > #   in $PATH.
	I0626 20:10:15.408954   30564 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0626 20:10:15.408969   30564 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0626 20:10:15.408981   30564 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0626 20:10:15.408988   30564 command_runner.go:130] > #   state.
	I0626 20:10:15.409002   30564 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0626 20:10:15.409015   30564 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0626 20:10:15.409028   30564 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0626 20:10:15.409041   30564 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0626 20:10:15.409054   30564 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0626 20:10:15.409068   30564 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0626 20:10:15.409077   30564 command_runner.go:130] > #   The currently recognized values are:
	I0626 20:10:15.409091   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0626 20:10:15.409106   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0626 20:10:15.409119   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0626 20:10:15.409132   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0626 20:10:15.409147   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0626 20:10:15.409161   30564 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0626 20:10:15.409176   30564 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0626 20:10:15.409190   30564 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0626 20:10:15.409202   30564 command_runner.go:130] > #   should be moved to the container's cgroup
	I0626 20:10:15.409213   30564 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0626 20:10:15.409221   30564 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0626 20:10:15.409231   30564 command_runner.go:130] > runtime_type = "oci"
	I0626 20:10:15.409240   30564 command_runner.go:130] > runtime_root = "/run/runc"
	I0626 20:10:15.409248   30564 command_runner.go:130] > runtime_config_path = ""
	I0626 20:10:15.409257   30564 command_runner.go:130] > monitor_path = ""
	I0626 20:10:15.409265   30564 command_runner.go:130] > monitor_cgroup = ""
	I0626 20:10:15.409276   30564 command_runner.go:130] > monitor_exec_cgroup = ""
	I0626 20:10:15.409289   30564 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0626 20:10:15.409299   30564 command_runner.go:130] > # running containers
	I0626 20:10:15.409309   30564 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0626 20:10:15.409322   30564 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0626 20:10:15.409403   30564 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0626 20:10:15.409416   30564 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0626 20:10:15.409425   30564 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0626 20:10:15.409439   30564 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0626 20:10:15.409450   30564 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0626 20:10:15.409460   30564 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0626 20:10:15.409477   30564 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0626 20:10:15.409488   30564 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0626 20:10:15.409499   30564 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0626 20:10:15.409511   30564 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0626 20:10:15.409521   30564 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0626 20:10:15.409537   30564 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0626 20:10:15.409553   30564 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0626 20:10:15.409566   30564 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0626 20:10:15.409584   30564 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0626 20:10:15.409599   30564 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0626 20:10:15.409612   30564 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0626 20:10:15.409627   30564 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0626 20:10:15.409636   30564 command_runner.go:130] > # Example:
	I0626 20:10:15.409646   30564 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0626 20:10:15.409658   30564 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0626 20:10:15.409675   30564 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0626 20:10:15.409687   30564 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0626 20:10:15.409694   30564 command_runner.go:130] > # cpuset = 0
	I0626 20:10:15.409704   30564 command_runner.go:130] > # cpushares = "0-1"
	I0626 20:10:15.409714   30564 command_runner.go:130] > # Where:
	I0626 20:10:15.409725   30564 command_runner.go:130] > # The workload name is workload-type.
	I0626 20:10:15.409738   30564 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0626 20:10:15.409749   30564 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0626 20:10:15.409762   30564 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0626 20:10:15.409778   30564 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0626 20:10:15.409791   30564 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0626 20:10:15.409800   30564 command_runner.go:130] > # 
	I0626 20:10:15.409811   30564 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0626 20:10:15.409819   30564 command_runner.go:130] > #
	I0626 20:10:15.409830   30564 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0626 20:10:15.409843   30564 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0626 20:10:15.409856   30564 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0626 20:10:15.409870   30564 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0626 20:10:15.409885   30564 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0626 20:10:15.409895   30564 command_runner.go:130] > [crio.image]
	I0626 20:10:15.409906   30564 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0626 20:10:15.409917   30564 command_runner.go:130] > # default_transport = "docker://"
	I0626 20:10:15.409929   30564 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0626 20:10:15.409942   30564 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0626 20:10:15.409949   30564 command_runner.go:130] > # global_auth_file = ""
	I0626 20:10:15.409961   30564 command_runner.go:130] > # The image used to instantiate infra containers.
	I0626 20:10:15.409973   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:10:15.409981   30564 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0626 20:10:15.409995   30564 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0626 20:10:15.410008   30564 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0626 20:10:15.410019   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:10:15.410029   30564 command_runner.go:130] > # pause_image_auth_file = ""
	I0626 20:10:15.410039   30564 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0626 20:10:15.410052   30564 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0626 20:10:15.410066   30564 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0626 20:10:15.410079   30564 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0626 20:10:15.410092   30564 command_runner.go:130] > # pause_command = "/pause"
	I0626 20:10:15.410106   30564 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0626 20:10:15.410119   30564 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0626 20:10:15.410133   30564 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0626 20:10:15.410147   30564 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0626 20:10:15.410160   30564 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0626 20:10:15.410170   30564 command_runner.go:130] > # signature_policy = ""
	I0626 20:10:15.410183   30564 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0626 20:10:15.410197   30564 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0626 20:10:15.410206   30564 command_runner.go:130] > # changing them here.
	I0626 20:10:15.410214   30564 command_runner.go:130] > # insecure_registries = [
	I0626 20:10:15.410222   30564 command_runner.go:130] > # ]
	I0626 20:10:15.410236   30564 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0626 20:10:15.410248   30564 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0626 20:10:15.410259   30564 command_runner.go:130] > # image_volumes = "mkdir"
	I0626 20:10:15.410267   30564 command_runner.go:130] > # Temporary directory to use for storing big files
	I0626 20:10:15.410273   30564 command_runner.go:130] > # big_files_temporary_dir = ""
	I0626 20:10:15.410281   30564 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0626 20:10:15.410289   30564 command_runner.go:130] > # CNI plugins.
	I0626 20:10:15.410294   30564 command_runner.go:130] > [crio.network]
	I0626 20:10:15.410302   30564 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0626 20:10:15.410310   30564 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0626 20:10:15.410317   30564 command_runner.go:130] > # cni_default_network = ""
	I0626 20:10:15.410325   30564 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0626 20:10:15.410335   30564 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0626 20:10:15.410344   30564 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0626 20:10:15.410350   30564 command_runner.go:130] > # plugin_dirs = [
	I0626 20:10:15.410356   30564 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0626 20:10:15.410363   30564 command_runner.go:130] > # ]
	I0626 20:10:15.410373   30564 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0626 20:10:15.410380   30564 command_runner.go:130] > [crio.metrics]
	I0626 20:10:15.410389   30564 command_runner.go:130] > # Globally enable or disable metrics support.
	I0626 20:10:15.410396   30564 command_runner.go:130] > enable_metrics = true
	I0626 20:10:15.410405   30564 command_runner.go:130] > # Specify enabled metrics collectors.
	I0626 20:10:15.410413   30564 command_runner.go:130] > # Per default all metrics are enabled.
	I0626 20:10:15.410422   30564 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0626 20:10:15.410436   30564 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0626 20:10:15.410446   30564 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0626 20:10:15.410453   30564 command_runner.go:130] > # metrics_collectors = [
	I0626 20:10:15.410464   30564 command_runner.go:130] > # 	"operations",
	I0626 20:10:15.410478   30564 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0626 20:10:15.410485   30564 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0626 20:10:15.410492   30564 command_runner.go:130] > # 	"operations_errors",
	I0626 20:10:15.410500   30564 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0626 20:10:15.410511   30564 command_runner.go:130] > # 	"image_pulls_by_name",
	I0626 20:10:15.410519   30564 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0626 20:10:15.410530   30564 command_runner.go:130] > # 	"image_pulls_failures",
	I0626 20:10:15.410541   30564 command_runner.go:130] > # 	"image_pulls_successes",
	I0626 20:10:15.410551   30564 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0626 20:10:15.410562   30564 command_runner.go:130] > # 	"image_layer_reuse",
	I0626 20:10:15.410572   30564 command_runner.go:130] > # 	"containers_oom_total",
	I0626 20:10:15.410582   30564 command_runner.go:130] > # 	"containers_oom",
	I0626 20:10:15.410590   30564 command_runner.go:130] > # 	"processes_defunct",
	I0626 20:10:15.410599   30564 command_runner.go:130] > # 	"operations_total",
	I0626 20:10:15.410612   30564 command_runner.go:130] > # 	"operations_latency_seconds",
	I0626 20:10:15.410624   30564 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0626 20:10:15.410634   30564 command_runner.go:130] > # 	"operations_errors_total",
	I0626 20:10:15.410642   30564 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0626 20:10:15.410654   30564 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0626 20:10:15.410665   30564 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0626 20:10:15.410673   30564 command_runner.go:130] > # 	"image_pulls_success_total",
	I0626 20:10:15.410682   30564 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0626 20:10:15.410690   30564 command_runner.go:130] > # 	"containers_oom_count_total",
	I0626 20:10:15.410699   30564 command_runner.go:130] > # ]
	I0626 20:10:15.410708   30564 command_runner.go:130] > # The port on which the metrics server will listen.
	I0626 20:10:15.410718   30564 command_runner.go:130] > # metrics_port = 9090
	I0626 20:10:15.410730   30564 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0626 20:10:15.410740   30564 command_runner.go:130] > # metrics_socket = ""
	I0626 20:10:15.410750   30564 command_runner.go:130] > # The certificate for the secure metrics server.
	I0626 20:10:15.410763   30564 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0626 20:10:15.410776   30564 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0626 20:10:15.410787   30564 command_runner.go:130] > # certificate on any modification event.
	I0626 20:10:15.410800   30564 command_runner.go:130] > # metrics_cert = ""
	I0626 20:10:15.410812   30564 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0626 20:10:15.410824   30564 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0626 20:10:15.410834   30564 command_runner.go:130] > # metrics_key = ""
	I0626 20:10:15.410847   30564 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0626 20:10:15.410855   30564 command_runner.go:130] > [crio.tracing]
	I0626 20:10:15.410865   30564 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0626 20:10:15.410875   30564 command_runner.go:130] > # enable_tracing = false
	I0626 20:10:15.410888   30564 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0626 20:10:15.410898   30564 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0626 20:10:15.410910   30564 command_runner.go:130] > # Number of samples to collect per million spans.
	I0626 20:10:15.410921   30564 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0626 20:10:15.410934   30564 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0626 20:10:15.410942   30564 command_runner.go:130] > [crio.stats]
	I0626 20:10:15.410952   30564 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0626 20:10:15.410965   30564 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0626 20:10:15.410974   30564 command_runner.go:130] > # stats_collection_period = 0
	I0626 20:10:15.411009   30564 command_runner.go:130] ! time="2023-06-26 20:10:15.349694059Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0626 20:10:15.411031   30564 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
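	One setting worth pulling out of the `crio config` dump above: cgroup_manager = "cgroupfs" must agree with the kubelet's cgroupDriver (also cgroupfs further down in this log); a mismatch between the two is a classic cause of pods failing to start. A hedged sketch of reading that single key from the rendered TOML — illustrative only, using the BurntSushi/toml parser, not how minikube itself checks it:

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// crioConf models just the [crio.runtime] table we care about.
	type crioConf struct {
		Crio struct {
			Runtime struct {
				CgroupManager string `toml:"cgroup_manager"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		const rendered = `
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	`
		var c crioConf
		if _, err := toml.Decode(rendered, &c); err != nil {
			panic(err)
		}
		// Should print "cgroupfs", matching the kubelet's cgroupDriver.
		fmt.Println("crio cgroup manager:", c.Crio.Runtime.CgroupManager)
	}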
	I0626 20:10:15.411138   30564 cni.go:84] Creating CNI manager for ""
	I0626 20:10:15.411155   30564 cni.go:137] 3 nodes found, recommending kindnet
	I0626 20:10:15.411167   30564 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:10:15.411192   30564 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-050558 NodeName:multinode-050558 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:10:15.411350   30564 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-050558"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
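	The generated file above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by ---). A minimal sketch of walking such a stream and listing each document's kind, e.g. as a pre-flight sanity check — illustrative only; kubeadm itself can validate the real file via `kubeadm init --config <file> --dry-run`:

	package main

	import (
		"fmt"
		"io"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		const cfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	`
		// yaml.v3's decoder steps through multi-document streams one
		// Decode call at a time, returning io.EOF after the last document.
		dec := yaml.NewDecoder(strings.NewReader(cfg))
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}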
	I0626 20:10:15.411446   30564 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-050558 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
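	Note the empty ExecStart= line in the generated drop-in above: for systemd drop-ins this is the standard idiom for replacing, rather than appending to, the ExecStart inherited from the base kubelet.service, so the full command on the following line becomes the unit's only start command.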
	I0626 20:10:15.411514   30564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:10:15.422558   30564 command_runner.go:130] > kubeadm
	I0626 20:10:15.422579   30564 command_runner.go:130] > kubectl
	I0626 20:10:15.422585   30564 command_runner.go:130] > kubelet
	I0626 20:10:15.422608   30564 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:10:15.422660   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:10:15.432603   30564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0626 20:10:15.448596   30564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:10:15.466103   30564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0626 20:10:15.484557   30564 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0626 20:10:15.488475   30564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.229	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
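	The bash one-liner above keeps /etc/hosts idempotent: grep -v drops any existing control-plane.minikube.internal entry, the echo appends the current mapping, and sudo cp installs the result, so repeated provisioning runs never accumulate stale lines. A rough Go equivalent of the same filter-then-append step — illustrative only, not minikube's implementation:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// updateHosts returns the hosts file content with any existing entry
	// for `host` removed and a fresh "ip\thost" line appended, mirroring
	// the grep -v / echo / cp pipeline in the log above.
	func updateHosts(content, ip, host string) string {
		var kept []string
		for _, line := range strings.Split(content, "\n") {
			if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+host) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, host)
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		fmt.Print(updateHosts(string(data), "192.168.39.229", "control-plane.minikube.internal"))
	}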
	I0626 20:10:15.500349   30564 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558 for IP: 192.168.39.229
	I0626 20:10:15.500374   30564 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:10:15.500509   30564 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:10:15.500543   30564 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:10:15.500616   30564 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key
	I0626 20:10:15.500662   30564 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key.24f4b2b2
	I0626 20:10:15.500695   30564 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.key
	I0626 20:10:15.500704   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0626 20:10:15.500717   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0626 20:10:15.500729   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0626 20:10:15.500743   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0626 20:10:15.500758   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0626 20:10:15.500770   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0626 20:10:15.500781   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0626 20:10:15.500792   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0626 20:10:15.500853   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:10:15.500878   30564 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:10:15.500887   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:10:15.500910   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:10:15.500936   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:10:15.500958   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:10:15.501002   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:10:15.501034   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:10:15.501047   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem -> /usr/share/ca-certificates/14443.pem
	I0626 20:10:15.501059   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /usr/share/ca-certificates/144432.pem
	I0626 20:10:15.502199   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:10:15.525538   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0626 20:10:15.547196   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:10:15.569728   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 20:10:15.592633   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:10:15.615395   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:10:15.638857   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:10:15.662492   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:10:15.686469   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:10:15.709843   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:10:15.733330   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:10:15.757217   30564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:10:15.774302   30564 ssh_runner.go:195] Run: openssl version
	I0626 20:10:15.779790   30564 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0626 20:10:15.780097   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:10:15.791687   30564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:10:15.796434   30564 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:10:15.796603   30564 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:10:15.796660   30564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:10:15.803104   30564 command_runner.go:130] > b5213941
	I0626 20:10:15.803444   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:10:15.815007   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:10:15.827424   30564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:10:15.832004   30564 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:10:15.832052   30564 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:10:15.832090   30564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:10:15.837531   30564 command_runner.go:130] > 51391683
	I0626 20:10:15.837592   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:10:15.848408   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:10:15.859477   30564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:10:15.864118   30564 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:10:15.864368   30564 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:10:15.864427   30564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:10:15.869775   30564 command_runner.go:130] > 3ec20f2e
	I0626 20:10:15.870014   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
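Each CA file is installed twice: the PEM itself under /usr/share/ca-certificates, and a symlink named <subject-hash>.0 under /etc/ssl/certs, which is the hashed-directory layout OpenSSL uses to locate trust anchors. A sketch of that step, shelling out to openssl for the hash just as the log does (the helper name is mine):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> pem so that
	// OpenSSL's hashed-directory lookup can find the certificate.
	func linkBySubjectHash(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Replace any stale link, matching the `ln -fs` in the log.
		_ = os.Remove(link)
		return os.Symlink(pem, link)
	}

	func main() {
		for _, pem := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/14443.pem",
			"/usr/share/ca-certificates/144432.pem",
		} {
			if err := linkBySubjectHash(pem); err != nil {
				fmt.Fprintln(os.Stderr, pem, err)
			}
		}
	}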
	I0626 20:10:15.880688   30564 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:10:15.885359   30564 command_runner.go:130] > ca.crt
	I0626 20:10:15.885392   30564 command_runner.go:130] > ca.key
	I0626 20:10:15.885400   30564 command_runner.go:130] > healthcheck-client.crt
	I0626 20:10:15.885407   30564 command_runner.go:130] > healthcheck-client.key
	I0626 20:10:15.885414   30564 command_runner.go:130] > peer.crt
	I0626 20:10:15.885420   30564 command_runner.go:130] > peer.key
	I0626 20:10:15.885425   30564 command_runner.go:130] > server.crt
	I0626 20:10:15.885431   30564 command_runner.go:130] > server.key
	I0626 20:10:15.885513   30564 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:10:15.891451   30564 command_runner.go:130] > Certificate will not expire
	I0626 20:10:15.891510   30564 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:10:15.897241   30564 command_runner.go:130] > Certificate will not expire
	I0626 20:10:15.897531   30564 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:10:15.903467   30564 command_runner.go:130] > Certificate will not expire
	I0626 20:10:15.903534   30564 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:10:15.909079   30564 command_runner.go:130] > Certificate will not expire
	I0626 20:10:15.909489   30564 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:10:15.915092   30564 command_runner.go:130] > Certificate will not expire
	I0626 20:10:15.915271   30564 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0626 20:10:15.921055   30564 command_runner.go:130] > Certificate will not expire
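openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 86400 seconds; each "Certificate will not expire" line above is that probe passing. The equivalent check in pure Go, reading NotAfter directly instead of shelling out:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// inside the given window (the log uses 86400s, i.e. 24h).
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			if soon {
				fmt.Println(p, "expires within 24h; regenerate")
			}
		}
	}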
	I0626 20:10:15.921422   30564 kubeadm.go:404] StartCluster: {Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:10:15.921561   30564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:10:15.921600   30564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:10:15.956168   30564 cri.go:89] found id: ""
	I0626 20:10:15.956246   30564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:10:15.966970   30564 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0626 20:10:15.966996   30564 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0626 20:10:15.967005   30564 command_runner.go:130] > /var/lib/minikube/etcd:
	I0626 20:10:15.967010   30564 command_runner.go:130] > member
	I0626 20:10:15.967041   30564 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:10:15.967053   30564 kubeadm.go:636] restartCluster start
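The restart decision hinges on three artifacts surviving on disk: the kubelet flags file, the kubelet config, and an etcd "member" directory. A minimal sketch of that presence probe (hasExistingCluster is a hypothetical helper, not minikube's):

	package main

	import (
		"fmt"
		"os"
	)

	// hasExistingCluster mirrors the `sudo ls` probe in the log: if all
	// three artifacts survive, a restart is attempted instead of a fresh init.
	func hasExistingCluster() bool {
		for _, p := range []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd/member",
		} {
			if _, err := os.Stat(p); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		if hasExistingCluster() {
			fmt.Println("found existing configuration files, will attempt cluster restart")
		} else {
			fmt.Println("no prior state; full kubeadm init required")
		}
	}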
	I0626 20:10:15.967110   30564 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:10:15.977604   30564 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:15.978163   30564 kubeconfig.go:92] found "multinode-050558" server: "https://192.168.39.229:8443"
	I0626 20:10:15.978547   30564 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:10:15.978748   30564 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:10:15.979454   30564 cert_rotation.go:137] Starting client certificate rotation controller
	I0626 20:10:15.979599   30564 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:10:15.989882   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:15.989941   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:16.002199   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:16.503006   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:16.503081   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:16.515265   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:17.003162   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:17.003243   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:17.015900   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:17.502435   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:17.502540   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:17.516934   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:18.002487   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:18.002558   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:18.016210   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:18.502715   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:18.502803   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:18.514905   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:19.002468   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:19.002547   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:19.014710   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:19.503344   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:19.503449   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:19.516753   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:20.002886   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:20.002966   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:20.016188   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:20.502720   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:20.502800   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:20.516002   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:21.002529   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:21.002605   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:21.015214   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:21.502774   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:21.502869   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:21.515596   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:22.002447   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:22.002553   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:22.015521   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:22.503119   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:22.503186   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:22.515877   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:23.002414   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:23.002503   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:23.014992   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:23.502576   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:23.502664   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:23.515798   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:24.002392   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:24.002471   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:24.015058   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:24.502633   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:24.502714   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:24.516002   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:25.003074   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:25.003144   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:25.015287   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:25.502951   30564 api_server.go:166] Checking apiserver status ...
	I0626 20:10:25.503037   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:10:25.516232   30564 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:10:25.989910   30564 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
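The run of identical pgrep probes above is a fixed-cadence poll (roughly every 500ms) bounded by a context deadline; when the deadline fires without an apiserver process appearing, the code concludes a reconfigure is needed. The shape of that loop, sketched with an assumed 10s budget:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a running kube-apiserver process until the
	// context deadline fires, roughly matching the 500ms cadence in the log.
	func waitForAPIServer(ctx context.Context) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-tick.C:
				// pgrep exits 0 only when a matching process exists.
				if exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := waitForAPIServer(ctx); err != nil {
			fmt.Println("needs reconfigure:", err)
		}
	}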
	I0626 20:10:25.989940   30564 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:10:25.989951   30564 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:10:25.990006   30564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:10:26.021818   30564 cri.go:89] found id: ""
	I0626 20:10:26.021895   30564 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:10:26.038976   30564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:10:26.048836   30564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0626 20:10:26.048864   30564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0626 20:10:26.048875   30564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0626 20:10:26.048889   30564 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:10:26.048984   30564 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:10:26.049056   30564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:10:26.059472   30564 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:10:26.059499   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:10:26.179315   30564 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:10:26.179339   30564 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0626 20:10:26.179345   30564 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0626 20:10:26.179352   30564 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:10:26.179358   30564 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0626 20:10:26.179364   30564 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:10:26.179369   30564 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0626 20:10:26.179374   30564 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0626 20:10:26.179382   30564 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:10:26.179404   30564 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:10:26.179415   30564 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:10:26.179419   30564 command_runner.go:130] > [certs] Using the existing "sa" key
	I0626 20:10:26.179444   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:10:26.234338   30564 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:10:26.376375   30564 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:10:26.445670   30564 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:10:26.576462   30564 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:10:26.857559   30564 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:10:26.860109   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:10:26.941007   30564 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:10:26.942338   30564 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:10:26.942376   30564 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0626 20:10:27.069403   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:10:27.197426   30564 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:10:27.197451   30564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:10:27.197461   30564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:10:27.197473   30564 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:10:27.197541   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:10:27.262017   30564 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
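Rather than a full kubeadm init, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch that drives the same sequence with the binary and config paths shown in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Phases replayed during a cluster restart, in the order the log runs them.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
				phase)
			c := exec.Command("/bin/bash", "-c", cmd)
			c.Stdout, c.Stderr = os.Stdout, os.Stderr
			if err := c.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
				os.Exit(1)
			}
		}
	}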
	I0626 20:10:27.266341   30564 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:10:27.266425   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:10:27.779062   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:10:28.279244   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:10:28.779447   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:10:29.279284   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:10:29.305981   30564 command_runner.go:130] > 1073
	I0626 20:10:29.306013   30564 api_server.go:72] duration metric: took 2.039678506s to wait for apiserver process to appear ...
	I0626 20:10:29.306023   30564 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:10:29.306046   30564 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:10:29.306479   30564 api_server.go:269] stopped: https://192.168.39.229:8443/healthz: Get "https://192.168.39.229:8443/healthz": dial tcp 192.168.39.229:8443: connect: connection refused
	I0626 20:10:29.807442   30564 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:10:33.658343   30564 api_server.go:279] https://192.168.39.229:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:10:33.658384   30564 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:10:33.658399   30564 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:10:33.751432   30564 api_server.go:279] https://192.168.39.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:10:33.751474   30564 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:10:33.807607   30564 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:10:33.825833   30564 api_server.go:279] https://192.168.39.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:10:33.825869   30564 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:10:34.307383   30564 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:10:34.314318   30564 api_server.go:279] https://192.168.39.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:10:34.314340   30564 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:10:34.807421   30564 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:10:34.813102   30564 api_server.go:279] https://192.168.39.229:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:10:34.813124   30564 api_server.go:103] status: https://192.168.39.229:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:10:35.306702   30564 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:10:35.321461   30564 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
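The /healthz polling above passes through three states: 403 while anonymous access is still forbidden (RBAC bootstrap roles not yet in place), 500 while individual poststarthooks still report failed, and finally 200 with body "ok". A sketch of that gate using the profile's client certificate; the retry budget and the use of these exact credentials are assumptions:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func newClient() (*http.Client, error) {
		cert, err := tls.LoadX509KeyPair(
			"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt",
			"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key")
		if err != nil {
			return nil, err
		}
		caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt")
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		return &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}}}, nil
	}

	func main() {
		c, err := newClient()
		if err != nil {
			panic(err)
		}
		for i := 0; i < 60; i++ { // assumption: ~30s budget at 500ms
			resp, err := c.Get("https://192.168.39.229:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				// 403: anonymous forbidden pre-RBAC; 500: poststarthooks pending.
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("healthz never returned 200")
	}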
	I0626 20:10:35.321550   30564 round_trippers.go:463] GET https://192.168.39.229:8443/version
	I0626 20:10:35.321558   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:35.321570   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:35.321578   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:35.332200   30564 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0626 20:10:35.332228   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:35.332240   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:35.332249   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:35.332260   30564 round_trippers.go:580]     Content-Length: 263
	I0626 20:10:35.332269   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:35 GMT
	I0626 20:10:35.332287   30564 round_trippers.go:580]     Audit-Id: a771fa77-b231-45a7-b23f-a03b172a56ef
	I0626 20:10:35.332295   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:35.332305   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:35.332426   30564 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0626 20:10:35.332540   30564 api_server.go:141] control plane version: v1.27.3
	I0626 20:10:35.332562   30564 api_server.go:131] duration metric: took 6.026532304s to wait for apiserver health ...
	I0626 20:10:35.332572   30564 cni.go:84] Creating CNI manager for ""
	I0626 20:10:35.332591   30564 cni.go:137] 3 nodes found, recommending kindnet
	I0626 20:10:35.334642   30564 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0626 20:10:35.336115   30564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0626 20:10:35.343983   30564 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0626 20:10:35.344015   30564 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0626 20:10:35.344026   30564 command_runner.go:130] > Device: 11h/17d	Inode: 3543        Links: 1
	I0626 20:10:35.344036   30564 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 20:10:35.344045   30564 command_runner.go:130] > Access: 2023-06-26 20:10:02.269403478 +0000
	I0626 20:10:35.344054   30564 command_runner.go:130] > Modify: 2023-06-22 22:21:30.000000000 +0000
	I0626 20:10:35.344062   30564 command_runner.go:130] > Change: 2023-06-26 20:10:00.284403478 +0000
	I0626 20:10:35.344070   30564 command_runner.go:130] >  Birth: -
	I0626 20:10:35.344271   30564 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0626 20:10:35.344289   30564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0626 20:10:35.393799   30564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0626 20:10:36.637339   30564 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0626 20:10:36.637386   30564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0626 20:10:36.637396   30564 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0626 20:10:36.637404   30564 command_runner.go:130] > daemonset.apps/kindnet configured
	I0626 20:10:36.637463   30564 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.243623685s)
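With three nodes found, kindnet is chosen as the CNI and its manifest is applied with the cluster's bundled kubectl; the unchanged/configured lines are kubectl reporting the server-side result for each of the four objects. The apply step, sketched:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Apply the CNI manifest with the bundled kubectl, as logged above.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.27.3/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}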
	I0626 20:10:36.637490   30564 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:10:36.637579   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:10:36.637589   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.637601   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.637610   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.642542   30564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:10:36.642569   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.642579   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.642589   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.642597   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.642606   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.642616   30564 round_trippers.go:580]     Audit-Id: e5f1fe3b-d71d-4198-9cbf-b19a6a1016aa
	I0626 20:10:36.642628   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.644460   30564 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82080 chars]
	I0626 20:10:36.648446   30564 system_pods.go:59] 12 kube-system pods found
	I0626 20:10:36.648478   30564 system_pods.go:61] "coredns-5d78c9869d-5wffn" [c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:10:36.648489   30564 system_pods.go:61] "etcd-multinode-050558" [457d2420-8ece-4b92-8281-7866fa6a884a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:10:36.648495   30564 system_pods.go:61] "kindnet-9tprm" [23cc17d6-1401-413a-8f2f-71931b10ae4e] Running
	I0626 20:10:36.648499   30564 system_pods.go:61] "kindnet-kmcqm" [ae3400e2-ef47-4a1a-ade8-dde5988de08e] Running
	I0626 20:10:36.648508   30564 system_pods.go:61] "kindnet-vjpzs" [695a59a7-ddfd-4f5f-8084-86279daa17b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0626 20:10:36.648516   30564 system_pods.go:61] "kube-apiserver-multinode-050558" [00573436-b505-4be6-a86a-3ba9b74e1ad5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:10:36.648527   30564 system_pods.go:61] "kube-controller-manager-multinode-050558" [d90eb1a6-03bd-4bdf-b50d-9448cef0b578] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:10:36.648533   30564 system_pods.go:61] "kube-proxy-57pwt" [4611d3e6-962b-437a-8b38-387719e69da6] Running
	I0626 20:10:36.648538   30564 system_pods.go:61] "kube-proxy-67x99" [7ffa817a-1b4a-41a1-9a56-5c65849dc57e] Running
	I0626 20:10:36.648543   30564 system_pods.go:61] "kube-proxy-wwg6x" [bdb04dda-dd36-45be-8f0e-7dad2bce1ef0] Running
	I0626 20:10:36.648548   30564 system_pods.go:61] "kube-scheduler-multinode-050558" [1645e687-25f4-49b9-9d11-5f3db01fe7d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:10:36.648556   30564 system_pods.go:61] "storage-provisioner" [fd433ce1-f37e-4168-930f-a93cd00821cb] Running
	I0626 20:10:36.648563   30564 system_pods.go:74] duration metric: took 11.066869ms to wait for pod list to return data ...
	I0626 20:10:36.648574   30564 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:10:36.648636   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes
	I0626 20:10:36.648647   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.648657   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.648666   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.651255   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:36.651276   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.651285   30564 round_trippers.go:580]     Audit-Id: c09bfc9f-3a29-44c6-a93c-849a604f8e43
	I0626 20:10:36.651295   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.651304   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.651312   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.651320   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.651329   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.651622   30564 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"744"},"items":[{"metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"712","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15372 chars]
	I0626 20:10:36.652456   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:10:36.652479   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:10:36.652488   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:10:36.652492   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:10:36.652498   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:10:36.652502   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:10:36.652507   30564 node_conditions.go:105] duration metric: took 3.927533ms to run NodePressure ...
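
(The node_conditions lines above come from reading each node's reported capacity out of the NodeList response. For reference, a minimal client-go program performing the same read might look like the following; this is our sketch, not minikube's code, and it assumes a kubeconfig at the default location.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config and build a clientset against the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// These are the two quantities the log prints per node.
		storage := n.Status.Capacity["ephemeral-storage"]
		cpu := n.Status.Capacity["cpu"]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
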
	I0626 20:10:36.652522   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:10:36.807627   30564 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0626 20:10:36.878062   30564 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
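
(The two [addons] lines are the output of the kubeadm addon phase that the ssh_runner invokes on the guest. A hedged sketch of issuing that same command with os/exec follows; the PATH prefix and config path are copied from the log, everything else, including the lack of SSH plumbing, is our simplification.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Prefer the pinned kubeadm binary, falling back to the ambient PATH,
	// mirroring the env PATH="..." prefix in the logged shell command.
	path := "PATH=/var/lib/minikube/binaries/v1.27.3:" + os.Getenv("PATH")
	cmd := exec.Command("sudo", "env", path,
		"kubeadm", "init", "phase", "addon", "all",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out) // expect "[addons] Applied essential addon: ..." lines
	if err != nil {
		os.Exit(1)
	}
}
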
	I0626 20:10:36.879742   30564 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:10:36.879866   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0626 20:10:36.879881   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.879893   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.879904   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.883595   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:36.883620   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.883631   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.883638   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.883645   30564 round_trippers.go:580]     Audit-Id: b74c3492-d7c7-45e8-8664-e0909226248d
	I0626 20:10:36.883651   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.883656   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.883662   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.884397   30564 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"747"},"items":[{"metadata":{"name":"etcd-multinode-050558","namespace":"kube-system","uid":"457d2420-8ece-4b92-8281-7866fa6a884a","resourceVersion":"733","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.mirror":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.seen":"2023-06-26T19:59:55.756268397Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0626 20:10:36.885588   30564 kubeadm.go:787] kubelet initialised
	I0626 20:10:36.885614   30564 kubeadm.go:788] duration metric: took 5.846772ms waiting for restarted kubelet to initialise ...
	I0626 20:10:36.885623   30564 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
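
(Each per-pod wait below reduces to polling the Pod object until its Ready condition reports True. A sketch of that loop with client-go's wait helpers; the helper names and the 500ms poll interval are our assumptions, while the 4m0s timeout matches the log.)

package kverify

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls every 500ms until the pod is Ready or the timeout
// elapses (the log budgets 4m0s per pod).
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		return podIsReady(p), nil
	})
}
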
	I0626 20:10:36.885691   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:10:36.885703   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.885716   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.885725   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.889738   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:36.889754   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.889761   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.889768   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.889777   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.889791   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.889800   30564 round_trippers.go:580]     Audit-Id: 1a1e4c37-4421-4ed9-b00c-d6555ea62f64
	I0626 20:10:36.889806   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.891790   30564 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"747"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82649 chars]
	I0626 20:10:36.895278   30564 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:36.895363   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:36.895374   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.895385   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.895395   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.900228   30564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:10:36.900250   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.900261   30564 round_trippers.go:580]     Audit-Id: 50cbbb83-f023-4f98-8f35-e0cf68c7bb58
	I0626 20:10:36.900269   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.900278   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.900286   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.900296   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.900305   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.900438   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0626 20:10:36.900994   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:36.901014   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.901025   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.901034   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.903411   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:36.903425   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.903432   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.903437   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.903443   30564 round_trippers.go:580]     Audit-Id: ff10fbbc-c4db-4acc-a2a8-6afd0ea694cb
	I0626 20:10:36.903450   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.903460   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.903474   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.903927   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"712","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0626 20:10:36.904307   30564 pod_ready.go:97] node "multinode-050558" hosting pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:36.904326   30564 pod_ready.go:81] duration metric: took 9.024092ms waiting for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	E0626 20:10:36.904334   30564 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-050558" hosting pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
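
(The "(skipping!)" errors here and below are pod_ready declining to count a pod as Ready while its hosting node is itself NotReady, which is why each pod GET is followed by a node GET. The node-side test is just a scan of the node's status conditions, roughly as follows; the naming is ours, not minikube's.)

package kverify

import corev1 "k8s.io/api/core/v1"

// nodeIsReady reports whether the node's Ready condition is True; the node
// fetched above returns False here, so the pod wait is skipped.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
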
	I0626 20:10:36.904348   30564 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:36.904404   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-050558
	I0626 20:10:36.904414   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.904422   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.904429   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.906418   30564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:10:36.906440   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.906449   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.906457   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.906466   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.906474   30564 round_trippers.go:580]     Audit-Id: 5fbd2d3d-0de4-4eac-a302-42f15518b02a
	I0626 20:10:36.906482   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.906491   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.906719   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-050558","namespace":"kube-system","uid":"457d2420-8ece-4b92-8281-7866fa6a884a","resourceVersion":"733","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.mirror":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.seen":"2023-06-26T19:59:55.756268397Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0626 20:10:36.907072   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:36.907087   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.907097   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.907105   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.909167   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:36.909185   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.909196   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.909205   30564 round_trippers.go:580]     Audit-Id: 4f370977-3c15-487c-b0b4-75c7670001b4
	I0626 20:10:36.909213   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.909221   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.909237   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.909245   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.909360   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"712","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0626 20:10:36.909640   30564 pod_ready.go:97] node "multinode-050558" hosting pod "etcd-multinode-050558" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:36.909657   30564 pod_ready.go:81] duration metric: took 5.301243ms waiting for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	E0626 20:10:36.909665   30564 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-050558" hosting pod "etcd-multinode-050558" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:36.909685   30564 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:36.909736   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:36.909745   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.909756   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.909768   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.911728   30564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:10:36.911748   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.911758   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.911767   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.911780   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.911804   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.911816   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.911825   30564 round_trippers.go:580]     Audit-Id: 5f65eea2-99af-4197-9122-92567e12e92c
	I0626 20:10:36.911948   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:36.912412   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:36.912426   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.912433   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.912440   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.914153   30564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:10:36.914168   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.914176   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.914185   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.914194   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.914208   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.914224   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.914237   30564 round_trippers.go:580]     Audit-Id: 31dda1fd-34f0-4d28-907a-b7269d7e35b3
	I0626 20:10:36.914325   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"712","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0626 20:10:36.914681   30564 pod_ready.go:97] node "multinode-050558" hosting pod "kube-apiserver-multinode-050558" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:36.914697   30564 pod_ready.go:81] duration metric: took 5.003045ms waiting for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	E0626 20:10:36.914708   30564 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-050558" hosting pod "kube-apiserver-multinode-050558" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:36.914720   30564 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:36.914775   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-050558
	I0626 20:10:36.914784   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:36.914796   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:36.914808   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:36.916908   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:36.916931   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:36.916941   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:36.916950   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:36 GMT
	I0626 20:10:36.916958   30564 round_trippers.go:580]     Audit-Id: 3df7e442-fb9e-4cc2-a702-7e1a5cb5e51e
	I0626 20:10:36.916967   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:36.916974   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:36.916985   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:36.917178   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-050558","namespace":"kube-system","uid":"d90eb1a6-03bd-4bdf-b50d-9448cef0b578","resourceVersion":"735","creationTimestamp":"2023-06-26T20:00:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.mirror":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.seen":"2023-06-26T20:00:04.802665770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0626 20:10:37.038000   30564 request.go:628] Waited for 120.321968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:37.038059   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:37.038064   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:37.038071   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:37.038077   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:37.040914   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:37.040940   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:37.040950   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:37 GMT
	I0626 20:10:37.040959   30564 round_trippers.go:580]     Audit-Id: ef521492-c5a8-4eba-8bba-ce8228378216
	I0626 20:10:37.040967   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:37.040975   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:37.040986   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:37.040994   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:37.041168   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"712","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0626 20:10:37.041592   30564 pod_ready.go:97] node "multinode-050558" hosting pod "kube-controller-manager-multinode-050558" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:37.041612   30564 pod_ready.go:81] duration metric: took 126.880277ms waiting for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	E0626 20:10:37.041630   30564 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-050558" hosting pod "kube-controller-manager-multinode-050558" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
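
(The "Waited for ... due to client-side throttling, not priority and fairness" messages above and below are emitted by client-go's default client-side rate limiter, QPS 5 with burst 10, not by the API server. If the polling cadence mattered, the limits could be raised on the rest.Config along these lines; the numbers are illustrative, not minikube's.)

package kverify

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterClient builds a clientset whose rate limiter allows more
// concurrent polling before client-side waits kick in.
func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/second once the burst is spent
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}
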
	I0626 20:10:37.041640   30564 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-57pwt" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:37.238115   30564 request.go:628] Waited for 196.418038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-57pwt
	I0626 20:10:37.238171   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-57pwt
	I0626 20:10:37.238176   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:37.238183   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:37.238190   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:37.241607   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:37.241626   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:37.241633   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:37 GMT
	I0626 20:10:37.241639   30564 round_trippers.go:580]     Audit-Id: a87c265d-1106-47d3-a549-531fdfb6d1c8
	I0626 20:10:37.241644   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:37.241652   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:37.241660   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:37.241669   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:37.241813   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-57pwt","generateName":"kube-proxy-","namespace":"kube-system","uid":"4611d3e6-962b-437a-8b38-387719e69da6","resourceVersion":"685","creationTimestamp":"2023-06-26T20:01:54Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:01:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0626 20:10:37.438549   30564 request.go:628] Waited for 196.213333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:10:37.438613   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:10:37.438619   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:37.438629   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:37.438638   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:37.441158   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:37.441180   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:37.441187   30564 round_trippers.go:580]     Audit-Id: b14586a3-3e82-4e1d-8e0c-01a3d85be76f
	I0626 20:10:37.441193   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:37.441198   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:37.441203   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:37.441208   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:37.441214   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:37 GMT
	I0626 20:10:37.441389   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m03","uid":"0d94d9a3-b2d7-4a89-99ad-2d23c494ddb0","resourceVersion":"711","creationTimestamp":"2023-06-26T20:02:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3533 chars]
	I0626 20:10:37.441657   30564 pod_ready.go:92] pod "kube-proxy-57pwt" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:37.441672   30564 pod_ready.go:81] duration metric: took 400.024238ms waiting for pod "kube-proxy-57pwt" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:37.441690   30564 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:37.638096   30564 request.go:628] Waited for 196.343359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:10:37.638180   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:10:37.638187   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:37.638198   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:37.638211   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:37.641352   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:37.641393   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:37.641408   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:37.641417   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:37.641433   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:37.641442   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:37 GMT
	I0626 20:10:37.641450   30564 round_trippers.go:580]     Audit-Id: d077bad2-9a9b-4bf8-ab78-e9b0af4a41e2
	I0626 20:10:37.641461   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:37.642025   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-67x99","generateName":"kube-proxy-","namespace":"kube-system","uid":"7ffa817a-1b4a-41a1-9a56-5c65849dc57e","resourceVersion":"744","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0626 20:10:37.837764   30564 request.go:628] Waited for 195.286154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:37.837833   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:37.837839   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:37.837846   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:37.837854   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:37.841863   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:37.841889   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:37.841899   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:37.841908   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:37.841916   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:37 GMT
	I0626 20:10:37.841925   30564 round_trippers.go:580]     Audit-Id: 16ddc4c7-b696-4a5d-915f-0be957476673
	I0626 20:10:37.841933   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:37.841941   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:37.842763   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"712","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0626 20:10:37.843122   30564 pod_ready.go:97] node "multinode-050558" hosting pod "kube-proxy-67x99" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:37.843148   30564 pod_ready.go:81] duration metric: took 401.451545ms waiting for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	E0626 20:10:37.843159   30564 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-050558" hosting pod "kube-proxy-67x99" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:37.843168   30564 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:38.038669   30564 request.go:628] Waited for 195.412668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:10:38.038750   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:10:38.038757   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:38.038766   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:38.038778   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:38.041825   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:38.041843   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:38.041851   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:38.041856   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:38.041862   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:38 GMT
	I0626 20:10:38.041867   30564 round_trippers.go:580]     Audit-Id: 89c437fc-f0a8-45c5-b251-a8a531ea864e
	I0626 20:10:38.041873   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:38.041880   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:38.042023   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wwg6x","generateName":"kube-proxy-","namespace":"kube-system","uid":"bdb04dda-dd36-45be-8f0e-7dad2bce1ef0","resourceVersion":"478","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0626 20:10:38.237703   30564 request.go:628] Waited for 195.277535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:10:38.237780   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:10:38.237792   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:38.237802   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:38.237813   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:38.241564   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:38.241587   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:38.241604   30564 round_trippers.go:580]     Audit-Id: e3b8dfca-b310-43b2-9d62-eddf99cf62fc
	I0626 20:10:38.241612   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:38.241621   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:38.241637   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:38.241649   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:38.241659   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:38 GMT
	I0626 20:10:38.241868   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"710","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I0626 20:10:38.242141   30564 pod_ready.go:92] pod "kube-proxy-wwg6x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:38.242156   30564 pod_ready.go:81] duration metric: took 398.981393ms waiting for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:38.242165   30564 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:38.438539   30564 request.go:628] Waited for 196.304925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:10:38.438586   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:10:38.438591   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:38.438602   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:38.438608   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:38.441304   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:38.441324   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:38.441333   30564 round_trippers.go:580]     Audit-Id: 5ed0d4c3-ad53-4666-b1b7-1ab27f277404
	I0626 20:10:38.441341   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:38.441348   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:38.441356   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:38.441368   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:38.441394   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:38 GMT
	I0626 20:10:38.441562   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-050558","namespace":"kube-system","uid":"1645e687-25f4-49b9-9d11-5f3db01fe7d2","resourceVersion":"732","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.mirror":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.seen":"2023-06-26T19:59:55.756274617Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0626 20:10:38.638301   30564 request.go:628] Waited for 196.393733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:38.638376   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:38.638381   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:38.638388   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:38.638395   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:38.641326   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:38.641349   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:38.641360   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:38 GMT
	I0626 20:10:38.641384   30564 round_trippers.go:580]     Audit-Id: f44ec27d-be4d-48f9-a705-fa59cca6d6d0
	I0626 20:10:38.641393   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:38.641402   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:38.641409   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:38.641416   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:38.641671   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"712","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0626 20:10:38.641967   30564 pod_ready.go:97] node "multinode-050558" hosting pod "kube-scheduler-multinode-050558" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:38.641982   30564 pod_ready.go:81] duration metric: took 399.810437ms waiting for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	E0626 20:10:38.641989   30564 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-050558" hosting pod "kube-scheduler-multinode-050558" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-050558" has status "Ready":"False"
	I0626 20:10:38.641997   30564 pod_ready.go:38] duration metric: took 1.756364354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:10:38.642019   30564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:10:38.656749   30564 command_runner.go:130] > -16
	I0626 20:10:38.656797   30564 ops.go:34] apiserver oom_adj: -16
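
(The oom_adj probe above shells out to read /proc/<pid>/oom_adj for the apiserver, confirming kubeadm left it protected from the OOM killer. An equivalent standalone sketch; the error handling and first-PID choice are ours.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	// pgrep prints one PID per line; take the first.
	fields := strings.Fields(string(out))
	if len(fields) == 0 {
		panic("kube-apiserver not running")
	}
	data, err := os.ReadFile("/proc/" + fields[0] + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(data))) // the log saw -16 here
}
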
	I0626 20:10:38.656810   30564 kubeadm.go:640] restartCluster took 22.689751427s
	I0626 20:10:38.656817   30564 kubeadm.go:406] StartCluster complete in 22.735404708s
	I0626 20:10:38.656833   30564 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:10:38.656923   30564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:10:38.657505   30564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:10:38.657733   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:10:38.657935   30564 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:10:38.658065   30564 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:10:38.658103   30564 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:10:38.658391   30564 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:10:38.660854   30564 out.go:177] * Enabled addons: 
	I0626 20:10:38.662075   30564 addons.go:499] enable addons completed in 4.17596ms: enabled=[]
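
(The kapi client config dumped above is an ordinary rest.Config authenticating with the profile's client certificate. Built by hand, the equivalent is roughly the following; the host and file paths are copied from the log, and the helper itself is our sketch.)

package kverify

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newProfileClient builds a clientset for the multinode-050558 profile using
// its minikube-generated client certificate.
func newProfileClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://192.168.39.229:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key",
			CAFile:   "/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}
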
	I0626 20:10:38.662322   30564 round_trippers.go:463] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 20:10:38.662335   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:38.662343   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:38.662349   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:38.665512   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:38.665526   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:38.665533   30564 round_trippers.go:580]     Audit-Id: aada58ac-3d04-4ab7-9d8f-792f26c980f8
	I0626 20:10:38.665538   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:38.665544   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:38.665549   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:38.665557   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:38.665565   30564 round_trippers.go:580]     Content-Length: 291
	I0626 20:10:38.665579   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:38 GMT
	I0626 20:10:38.665629   30564 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94c202ca-4f15-4fc0-a8d2-e6d62293ec32","resourceVersion":"745","creationTimestamp":"2023-06-26T20:00:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0626 20:10:38.665779   30564 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-050558" context rescaled to 1 replicas
	I0626 20:10:38.665805   30564 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:10:38.668329   30564 out.go:177] * Verifying Kubernetes components...
	I0626 20:10:38.669820   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:10:38.771100   30564 command_runner.go:130] > apiVersion: v1
	I0626 20:10:38.771120   30564 command_runner.go:130] > data:
	I0626 20:10:38.771126   30564 command_runner.go:130] >   Corefile: |
	I0626 20:10:38.771132   30564 command_runner.go:130] >     .:53 {
	I0626 20:10:38.771137   30564 command_runner.go:130] >         log
	I0626 20:10:38.771150   30564 command_runner.go:130] >         errors
	I0626 20:10:38.771156   30564 command_runner.go:130] >         health {
	I0626 20:10:38.771164   30564 command_runner.go:130] >            lameduck 5s
	I0626 20:10:38.771168   30564 command_runner.go:130] >         }
	I0626 20:10:38.771176   30564 command_runner.go:130] >         ready
	I0626 20:10:38.771183   30564 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0626 20:10:38.771190   30564 command_runner.go:130] >            pods insecure
	I0626 20:10:38.771208   30564 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0626 20:10:38.771218   30564 command_runner.go:130] >            ttl 30
	I0626 20:10:38.771224   30564 command_runner.go:130] >         }
	I0626 20:10:38.771231   30564 command_runner.go:130] >         prometheus :9153
	I0626 20:10:38.771237   30564 command_runner.go:130] >         hosts {
	I0626 20:10:38.771243   30564 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0626 20:10:38.771250   30564 command_runner.go:130] >            fallthrough
	I0626 20:10:38.771254   30564 command_runner.go:130] >         }
	I0626 20:10:38.771260   30564 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0626 20:10:38.771265   30564 command_runner.go:130] >            max_concurrent 1000
	I0626 20:10:38.771268   30564 command_runner.go:130] >         }
	I0626 20:10:38.771273   30564 command_runner.go:130] >         cache 30
	I0626 20:10:38.771278   30564 command_runner.go:130] >         loop
	I0626 20:10:38.771284   30564 command_runner.go:130] >         reload
	I0626 20:10:38.771288   30564 command_runner.go:130] >         loadbalance
	I0626 20:10:38.771293   30564 command_runner.go:130] >     }
	I0626 20:10:38.771297   30564 command_runner.go:130] > kind: ConfigMap
	I0626 20:10:38.771300   30564 command_runner.go:130] > metadata:
	I0626 20:10:38.771315   30564 command_runner.go:130] >   creationTimestamp: "2023-06-26T20:00:04Z"
	I0626 20:10:38.771319   30564 command_runner.go:130] >   name: coredns
	I0626 20:10:38.771323   30564 command_runner.go:130] >   namespace: kube-system
	I0626 20:10:38.771326   30564 command_runner.go:130] >   resourceVersion: "363"
	I0626 20:10:38.771331   30564 command_runner.go:130] >   uid: d6a9305d-4072-4b2f-9835-f4e058f49445
	I0626 20:10:38.771408   30564 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0626 20:10:38.771408   30564 node_ready.go:35] waiting up to 6m0s for node "multinode-050558" to be "Ready" ...
	I0626 20:10:38.837639   30564 request.go:628] Waited for 66.156312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:38.837690   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:38.837696   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:38.837703   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:38.837709   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:38.841145   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:38.841170   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:38.841180   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:38.841190   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:38 GMT
	I0626 20:10:38.841198   30564 round_trippers.go:580]     Audit-Id: ee8fd633-78a4-41ac-b597-2dcfb309d40f
	I0626 20:10:38.841206   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:38.841214   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:38.841222   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:38.841327   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"712","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0626 20:10:39.342604   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:39.342628   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:39.342642   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:39.342648   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:39.345598   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:39.345620   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:39.345630   30564 round_trippers.go:580]     Audit-Id: 5d6b9989-bdd8-48d7-b6b5-bbaa29bdd16b
	I0626 20:10:39.345639   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:39.345648   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:39.345656   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:39.345663   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:39.345670   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:39 GMT
	I0626 20:10:39.345910   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"712","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0626 20:10:39.841915   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:39.841938   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:39.841946   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:39.841952   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:39.844486   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:39.844508   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:39.844517   30564 round_trippers.go:580]     Audit-Id: b339a3bc-80f4-4069-b625-471e5f3bb887
	I0626 20:10:39.844525   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:39.844532   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:39.844541   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:39.844552   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:39.844562   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:39 GMT
	I0626 20:10:39.845135   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:39.845486   30564 node_ready.go:49] node "multinode-050558" has status "Ready":"True"
	I0626 20:10:39.845502   30564 node_ready.go:38] duration metric: took 1.074075454s waiting for node "multinode-050558" to be "Ready" ...
	I0626 20:10:39.845513   30564 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:10:39.845574   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:10:39.845584   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:39.845595   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:39.845605   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:39.849323   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:39.849346   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:39.849357   30564 round_trippers.go:580]     Audit-Id: 1bc4b0b6-61bc-48ba-a07f-9b1582a59b01
	I0626 20:10:39.849366   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:39.849383   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:39.849393   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:39.849403   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:39.849411   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:39 GMT
	I0626 20:10:39.851388   30564 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"829"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82968 chars]
	I0626 20:10:39.853901   30564 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:39.853967   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:39.853978   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:39.853987   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:39.853994   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:39.857260   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:39.857274   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:39.857280   30564 round_trippers.go:580]     Audit-Id: 5cc075cf-5f7d-4095-9683-75fb0ed5fcea
	I0626 20:10:39.857286   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:39.857291   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:39.857298   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:39.857306   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:39.857314   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:39 GMT
	I0626 20:10:39.857517   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0626 20:10:39.858119   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:39.858135   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:39.858143   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:39.858149   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:39.860556   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:39.860574   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:39.860583   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:39.860591   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:39 GMT
	I0626 20:10:39.860613   30564 round_trippers.go:580]     Audit-Id: dd649568-6337-40e2-b0d6-2b174549791d
	I0626 20:10:39.860626   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:39.860636   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:39.860651   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:39.860877   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:40.362120   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:40.362194   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:40.362207   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:40.362218   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:40.365996   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:40.366024   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:40.366035   30564 round_trippers.go:580]     Audit-Id: 66508c33-be9e-45a2-8e1f-a719aa7a64ae
	I0626 20:10:40.366043   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:40.366052   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:40.366060   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:40.366069   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:40.366081   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:40 GMT
	I0626 20:10:40.366261   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0626 20:10:40.366740   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:40.366753   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:40.366764   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:40.366776   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:40.369048   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:40.369064   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:40.369073   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:40.369081   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:40.369090   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:40 GMT
	I0626 20:10:40.369099   30564 round_trippers.go:580]     Audit-Id: 6fa363e4-518b-41f7-a150-41702ff40b16
	I0626 20:10:40.369108   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:40.369114   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:40.369510   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:40.862215   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:40.862239   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:40.862248   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:40.862254   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:40.865913   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:40.865940   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:40.865953   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:40.865961   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:40.865968   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:40.865973   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:40.865978   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:40 GMT
	I0626 20:10:40.865983   30564 round_trippers.go:580]     Audit-Id: 648728db-36f1-43f8-9465-c3073404703a
	I0626 20:10:40.866673   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0626 20:10:40.867222   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:40.867239   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:40.867251   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:40.867261   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:40.869365   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:40.869400   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:40.869410   30564 round_trippers.go:580]     Audit-Id: 0b62cd20-2a03-4245-b1d0-ec5117044b2e
	I0626 20:10:40.869419   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:40.869427   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:40.869437   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:40.869445   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:40.869458   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:40 GMT
	I0626 20:10:40.869617   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:41.362298   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:41.362324   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:41.362332   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:41.362338   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:41.365454   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:41.365489   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:41.365500   30564 round_trippers.go:580]     Audit-Id: 204b18da-8188-4829-894e-ba8850e15729
	I0626 20:10:41.365506   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:41.365511   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:41.365517   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:41.365522   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:41.365528   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:41 GMT
	I0626 20:10:41.365775   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0626 20:10:41.366416   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:41.366432   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:41.366440   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:41.366446   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:41.368879   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:41.368893   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:41.368900   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:41 GMT
	I0626 20:10:41.368906   30564 round_trippers.go:580]     Audit-Id: ba818f7e-e4af-4e09-85b0-ec0b791eeb4d
	I0626 20:10:41.368911   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:41.368920   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:41.368928   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:41.368937   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:41.369248   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:41.862243   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:41.862284   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:41.862302   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:41.862312   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:41.865281   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:41.865303   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:41.865313   30564 round_trippers.go:580]     Audit-Id: 15d0c066-f526-4a30-8aa5-79f1ce6abcd5
	I0626 20:10:41.865322   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:41.865331   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:41.865343   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:41.865353   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:41.865365   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:41 GMT
	I0626 20:10:41.865529   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0626 20:10:41.865958   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:41.865972   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:41.865983   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:41.865992   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:41.868385   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:41.868402   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:41.868408   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:41.868414   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:41.868419   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:41.868427   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:41 GMT
	I0626 20:10:41.868436   30564 round_trippers.go:580]     Audit-Id: 3b5f31f7-41f9-49f9-83c9-206cb8f239cc
	I0626 20:10:41.868448   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:41.868639   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:41.868965   30564 pod_ready.go:102] pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:10:42.361347   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:42.361394   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:42.361406   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:42.361416   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:42.373496   30564 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0626 20:10:42.373522   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:42.373529   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:42 GMT
	I0626 20:10:42.373535   30564 round_trippers.go:580]     Audit-Id: 17c8c877-ee45-4d9b-9ccb-25465a53c587
	I0626 20:10:42.373540   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:42.373546   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:42.373551   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:42.373557   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:42.374051   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0626 20:10:42.374601   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:42.374618   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:42.374633   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:42.374645   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:42.377022   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:42.377038   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:42.377047   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:42 GMT
	I0626 20:10:42.377055   30564 round_trippers.go:580]     Audit-Id: bd9b6dd0-7707-437e-bcb9-c65bedb2c028
	I0626 20:10:42.377064   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:42.377073   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:42.377082   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:42.377088   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:42.377219   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:42.861930   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:42.861957   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:42.861968   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:42.861978   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:42.866862   30564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:10:42.866884   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:42.866891   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:42 GMT
	I0626 20:10:42.866902   30564 round_trippers.go:580]     Audit-Id: 7fbfdef5-bacf-4e22-95af-e13d443cc81f
	I0626 20:10:42.866911   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:42.866919   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:42.866934   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:42.866942   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:42.868051   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0626 20:10:42.868483   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:42.868497   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:42.868504   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:42.868511   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:42.872799   30564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:10:42.872820   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:42.872830   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:42.872840   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:42.872848   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:42.872857   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:42.872865   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:42 GMT
	I0626 20:10:42.872873   30564 round_trippers.go:580]     Audit-Id: 6be41811-eb82-44d0-9266-cb93d1b2ae0a
	I0626 20:10:42.873195   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:43.361847   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:43.361871   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:43.361879   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:43.361886   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:43.364966   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:43.364989   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:43.364999   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:43.365006   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:43 GMT
	I0626 20:10:43.365014   30564 round_trippers.go:580]     Audit-Id: 7487a621-3341-4f98-b82f-1479b686e538
	I0626 20:10:43.365023   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:43.365030   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:43.365038   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:43.365308   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"737","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0626 20:10:43.365730   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:43.365741   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:43.365748   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:43.365754   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:43.368460   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:43.368482   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:43.368492   30564 round_trippers.go:580]     Audit-Id: b09e31ec-b99f-49a4-980f-65d44842d7c3
	I0626 20:10:43.368501   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:43.368510   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:43.368518   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:43.368526   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:43.368535   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:43 GMT
	I0626 20:10:43.368921   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:43.861538   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:10:43.861559   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:43.861567   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:43.861573   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:43.864407   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:43.864428   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:43.864435   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:43 GMT
	I0626 20:10:43.864441   30564 round_trippers.go:580]     Audit-Id: 7a5261f7-4734-4c21-8833-6e87ecb25334
	I0626 20:10:43.864446   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:43.864451   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:43.864456   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:43.864472   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:43.864736   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"838","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0626 20:10:43.865188   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:43.865202   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:43.865209   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:43.865216   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:43.867475   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:43.867489   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:43.867496   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:43.867502   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:43 GMT
	I0626 20:10:43.867507   30564 round_trippers.go:580]     Audit-Id: c7a131cc-4eb2-43f8-8a39-5ab086dddb52
	I0626 20:10:43.867515   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:43.867523   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:43.867534   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:43.867709   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:43.868015   30564 pod_ready.go:92] pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:43.868030   30564 pod_ready.go:81] duration metric: took 4.014110647s waiting for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:43.868038   30564 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:43.868087   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-050558
	I0626 20:10:43.868096   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:43.868103   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:43.868112   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:43.870317   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:43.870342   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:43.870351   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:43 GMT
	I0626 20:10:43.870360   30564 round_trippers.go:580]     Audit-Id: 355e4849-b7be-4dd4-8172-9bf36723e7b3
	I0626 20:10:43.870368   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:43.870377   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:43.870386   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:43.870393   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:43.870525   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-050558","namespace":"kube-system","uid":"457d2420-8ece-4b92-8281-7866fa6a884a","resourceVersion":"832","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.mirror":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.seen":"2023-06-26T19:59:55.756268397Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0626 20:10:43.870864   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:43.870876   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:43.870883   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:43.870890   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:43.872837   30564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:10:43.872856   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:43.872865   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:43.872871   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:43.872879   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:43.872885   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:43.872897   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:43 GMT
	I0626 20:10:43.872902   30564 round_trippers.go:580]     Audit-Id: 0e4b1c09-02f6-4943-bb0b-6fd6d86422db
	I0626 20:10:43.873044   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:43.873395   30564 pod_ready.go:92] pod "etcd-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:43.873415   30564 pod_ready.go:81] duration metric: took 5.371652ms waiting for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:43.873430   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:43.873471   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:43.873479   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:43.873485   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:43.873493   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:43.875732   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:43.875760   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:43.875769   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:43.875778   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:43 GMT
	I0626 20:10:43.875788   30564 round_trippers.go:580]     Audit-Id: a5e3942d-8847-4de4-9851-74a4209e2a25
	I0626 20:10:43.875797   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:43.875806   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:43.875814   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:43.876004   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:43.876440   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:43.876454   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:43.876462   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:43.876468   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:43.878355   30564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:10:43.878371   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:43.878377   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:43.878383   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:43.878388   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:43.878393   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:43 GMT
	I0626 20:10:43.878401   30564 round_trippers.go:580]     Audit-Id: d1654876-d193-4a45-9c3f-f63828ad77b5
	I0626 20:10:43.878410   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:43.878560   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:44.379447   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:44.379472   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:44.379480   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:44.379486   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:44.383628   30564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:10:44.383655   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:44.383665   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:44.383674   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:44.383682   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:44.383690   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:44 GMT
	I0626 20:10:44.383698   30564 round_trippers.go:580]     Audit-Id: 57666224-d42e-4978-a316-2c4327d4e00d
	I0626 20:10:44.383706   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:44.383875   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:44.384296   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:44.384313   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:44.384324   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:44.384333   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:44.386582   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:44.386604   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:44.386614   30564 round_trippers.go:580]     Audit-Id: 7402ca56-8834-4db9-8414-5efad091c5d6
	I0626 20:10:44.386626   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:44.386634   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:44.386643   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:44.386652   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:44.386663   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:44 GMT
	I0626 20:10:44.386933   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:44.880060   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:44.880082   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:44.880090   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:44.880096   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:44.882806   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:44.882825   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:44.882832   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:44.882837   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:44.882843   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:44.882849   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:44.882854   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:44 GMT
	I0626 20:10:44.882859   30564 round_trippers.go:580]     Audit-Id: 67815a4e-7ebc-4acc-b443-80bfe27961da
	I0626 20:10:44.883194   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:44.883579   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:44.883590   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:44.883598   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:44.883603   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:44.885920   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:44.885939   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:44.885950   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:44.885958   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:44.885968   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:44.885978   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:44 GMT
	I0626 20:10:44.885991   30564 round_trippers.go:580]     Audit-Id: 056bb108-1c22-49c6-9b98-b9dedee061ac
	I0626 20:10:44.885999   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:44.886107   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:45.379671   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:45.379705   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:45.379713   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:45.379719   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:45.382738   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:45.382761   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:45.382772   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:45.382782   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:45.382791   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:45.382804   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:45 GMT
	I0626 20:10:45.382816   30564 round_trippers.go:580]     Audit-Id: bebaa852-76ae-41d7-96c0-e160639dc0e2
	I0626 20:10:45.382840   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:45.383528   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:45.384174   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:45.384203   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:45.384215   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:45.384228   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:45.386500   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:45.386514   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:45.386520   30564 round_trippers.go:580]     Audit-Id: a23ad175-cd52-40d5-8849-dd6858747bd0
	I0626 20:10:45.386526   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:45.386531   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:45.386536   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:45.386541   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:45.386546   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:45 GMT
	I0626 20:10:45.386861   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:45.879453   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:45.879476   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:45.879485   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:45.879491   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:45.882459   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:45.882486   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:45.882497   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:45.882506   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:45.882514   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:45 GMT
	I0626 20:10:45.882524   30564 round_trippers.go:580]     Audit-Id: bc308f30-9579-4413-8064-27a4107613d0
	I0626 20:10:45.882532   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:45.882539   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:45.882902   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:45.883322   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:45.883334   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:45.883341   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:45.883349   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:45.885644   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:45.885662   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:45.885672   30564 round_trippers.go:580]     Audit-Id: 10d6b49f-51f8-4e1b-b804-c4f456cb5cf0
	I0626 20:10:45.885680   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:45.885688   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:45.885698   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:45.885708   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:45.885718   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:45 GMT
	I0626 20:10:45.885957   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:45.886286   30564 pod_ready.go:102] pod "kube-apiserver-multinode-050558" in "kube-system" namespace has status "Ready":"False"
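After this "Ready":"False" result the poller re-fetches the pod at 20:10:46.379, 20:10:46.879, 20:10:47.379 and 20:10:47.879 — roughly every 500ms — until the apiserver pod's resourceVersion moves from 734 to 864 and the condition flips to True below. A sketch of such a bounded poll with the apimachinery wait package; the 500ms interval is inferred from these timestamps and the 6m0s bound from the "waiting up to 6m0s" lines, so treat both as assumptions about the test code:

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForPodReady polls check every 500ms for up to 6 minutes, returning
    // early once check reports true. A sketch, not minikube's actual helper.
    func waitForPodReady(check func() (bool, error)) error {
        return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, check)
    }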
	I0626 20:10:46.379836   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:46.379859   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:46.379869   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:46.379878   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:46.384899   30564 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0626 20:10:46.384917   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:46.384924   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:46.384930   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:46.384938   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:46.384954   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:46.384968   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:46 GMT
	I0626 20:10:46.384976   30564 round_trippers.go:580]     Audit-Id: e3adbb49-148a-4dc3-84b1-cd347a79a650
	I0626 20:10:46.385612   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:46.386030   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:46.386043   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:46.386050   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:46.386056   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:46.397312   30564 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0626 20:10:46.397332   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:46.397341   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:46 GMT
	I0626 20:10:46.397349   30564 round_trippers.go:580]     Audit-Id: f13ffbac-cb72-4b91-9c71-77f8f440fec7
	I0626 20:10:46.397357   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:46.397365   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:46.397382   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:46.397392   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:46.397543   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:46.879339   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:46.879369   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:46.879380   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:46.879390   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:46.882396   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:46.882423   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:46.882434   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:46.882443   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:46.882452   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:46.882460   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:46.882469   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:46 GMT
	I0626 20:10:46.882476   30564 round_trippers.go:580]     Audit-Id: 044d4ffc-72a4-47e3-ad6f-be2fe1943551
	I0626 20:10:46.882672   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:46.883162   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:46.883176   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:46.883183   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:46.883189   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:46.887328   30564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:10:46.887350   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:46.887359   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:46.887369   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:46 GMT
	I0626 20:10:46.887378   30564 round_trippers.go:580]     Audit-Id: 9a12b7b5-24ba-44f5-bbdc-ba7b1112bca4
	I0626 20:10:46.887386   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:46.887393   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:46.887401   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:46.888400   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:47.379069   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:47.379098   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:47.379109   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:47.379119   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:47.382186   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:47.382206   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:47.382213   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:47.382219   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:47 GMT
	I0626 20:10:47.382224   30564 round_trippers.go:580]     Audit-Id: 3c3f0e24-4047-45aa-85b0-0ff57c2a94ae
	I0626 20:10:47.382230   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:47.382235   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:47.382240   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:47.382480   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:47.383073   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:47.383090   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:47.383097   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:47.383103   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:47.385440   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:47.385456   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:47.385464   30564 round_trippers.go:580]     Audit-Id: 4757f4e0-2b6c-49a1-a52d-24aa421763c5
	I0626 20:10:47.385470   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:47.385476   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:47.385484   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:47.385495   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:47.385503   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:47 GMT
	I0626 20:10:47.385663   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:47.879315   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:47.879341   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:47.879352   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:47.879358   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:47.882112   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:47.882136   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:47.882146   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:47.882156   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:47.882164   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:47.882182   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:47.882191   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:47 GMT
	I0626 20:10:47.882199   30564 round_trippers.go:580]     Audit-Id: 03929968-aff4-4b9b-a61d-f2794619288b
	I0626 20:10:47.882328   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"734","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0626 20:10:47.882823   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:47.882836   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:47.882843   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:47.882849   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:47.884921   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:47.884936   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:47.884943   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:47.884949   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:47.884954   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:47.884961   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:47 GMT
	I0626 20:10:47.884972   30564 round_trippers.go:580]     Audit-Id: d95e47fd-409c-4ddc-8919-84d5aa8a983a
	I0626 20:10:47.884981   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:47.885145   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:48.379923   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:10:48.379949   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:48.379958   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:48.379964   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:48.383622   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:48.383654   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:48.383665   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:48.383675   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:48.383688   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:48 GMT
	I0626 20:10:48.383696   30564 round_trippers.go:580]     Audit-Id: 0031f9f2-edbd-4d2b-86c0-fc10952347de
	I0626 20:10:48.383712   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:48.383720   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:48.384589   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"864","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0626 20:10:48.385076   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:48.385091   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:48.385098   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:48.385105   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:48.387569   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:48.387588   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:48.387595   30564 round_trippers.go:580]     Audit-Id: fe87c3c8-9e00-400d-8ef4-4c12ecd65602
	I0626 20:10:48.387601   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:48.387617   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:48.387631   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:48.387639   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:48.387650   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:48 GMT
	I0626 20:10:48.387816   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:48.388216   30564 pod_ready.go:92] pod "kube-apiserver-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:48.388238   30564 pod_ready.go:81] duration metric: took 4.514799142s waiting for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:48.388251   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:48.388326   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-050558
	I0626 20:10:48.388337   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:48.388347   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:48.388359   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:48.392048   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:48.392065   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:48.392072   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:48.392078   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:48.392084   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:48.392097   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:48 GMT
	I0626 20:10:48.392114   30564 round_trippers.go:580]     Audit-Id: b8480498-b346-4784-a222-0ace2a19d3ab
	I0626 20:10:48.392122   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:48.392279   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-050558","namespace":"kube-system","uid":"d90eb1a6-03bd-4bdf-b50d-9448cef0b578","resourceVersion":"831","creationTimestamp":"2023-06-26T20:00:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.mirror":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.seen":"2023-06-26T20:00:04.802665770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0626 20:10:48.392720   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:48.392733   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:48.392740   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:48.392746   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:48.394884   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:48.394906   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:48.394916   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:48.394924   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:48.394933   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:48 GMT
	I0626 20:10:48.394953   30564 round_trippers.go:580]     Audit-Id: 19cb19e6-9672-4ff5-837f-5771b70c756f
	I0626 20:10:48.394962   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:48.394976   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:48.395231   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:48.395597   30564 pod_ready.go:92] pod "kube-controller-manager-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:48.395615   30564 pod_ready.go:81] duration metric: took 7.348274ms waiting for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:48.395625   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-57pwt" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:48.395673   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-57pwt
	I0626 20:10:48.395681   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:48.395688   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:48.395695   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:48.397803   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:48.397823   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:48.397845   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:48.397855   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:48 GMT
	I0626 20:10:48.397863   30564 round_trippers.go:580]     Audit-Id: 4371555f-4caf-4044-9ba8-94f577606e0a
	I0626 20:10:48.397874   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:48.397893   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:48.397902   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:48.398056   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-57pwt","generateName":"kube-proxy-","namespace":"kube-system","uid":"4611d3e6-962b-437a-8b38-387719e69da6","resourceVersion":"685","creationTimestamp":"2023-06-26T20:01:54Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:01:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0626 20:10:48.398487   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:10:48.398502   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:48.398509   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:48.398516   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:48.400526   30564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:10:48.400542   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:48.400552   30564 round_trippers.go:580]     Audit-Id: f0546a92-49d3-4774-93ea-8decb9eb4ee4
	I0626 20:10:48.400561   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:48.400569   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:48.400582   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:48.400591   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:48.400606   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:48 GMT
	I0626 20:10:48.400723   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m03","uid":"0d94d9a3-b2d7-4a89-99ad-2d23c494ddb0","resourceVersion":"850","creationTimestamp":"2023-06-26T20:02:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0626 20:10:48.401020   30564 pod_ready.go:92] pod "kube-proxy-57pwt" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:48.401037   30564 pod_ready.go:81] duration metric: took 5.405926ms waiting for pod "kube-proxy-57pwt" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:48.401048   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:48.438467   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:10:48.438489   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:48.438498   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:48.438504   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:48.441251   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:48.441272   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:48.441280   30564 round_trippers.go:580]     Audit-Id: f4876e44-48d6-43a0-8d00-61ea9976cd06
	I0626 20:10:48.441286   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:48.441291   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:48.441296   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:48.441302   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:48.441307   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:48 GMT
	I0626 20:10:48.441426   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-67x99","generateName":"kube-proxy-","namespace":"kube-system","uid":"7ffa817a-1b4a-41a1-9a56-5c65849dc57e","resourceVersion":"744","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0626 20:10:48.638282   30564 request.go:628] Waited for 196.376324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:48.638357   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:48.638362   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:48.638369   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:48.638376   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:48.641262   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:48.641297   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:48.641307   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:48.641315   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:48.641323   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:48.641336   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:48 GMT
	I0626 20:10:48.641349   30564 round_trippers.go:580]     Audit-Id: 2e12ee46-f547-40df-889d-bbbb43bf2e8e
	I0626 20:10:48.641357   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:48.641623   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:48.642085   30564 pod_ready.go:92] pod "kube-proxy-67x99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:48.642108   30564 pod_ready.go:81] duration metric: took 241.04962ms waiting for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:48.642120   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:48.838581   30564 request.go:628] Waited for 196.388033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:10:48.838742   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:10:48.838764   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:48.838776   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:48.838803   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:48.841819   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:48.841841   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:48.841851   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:48.841859   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:48.841868   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:48.841876   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:48 GMT
	I0626 20:10:48.841884   30564 round_trippers.go:580]     Audit-Id: 961330da-4867-4826-8277-db522d86bfce
	I0626 20:10:48.841893   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:48.842077   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wwg6x","generateName":"kube-proxy-","namespace":"kube-system","uid":"bdb04dda-dd36-45be-8f0e-7dad2bce1ef0","resourceVersion":"478","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0626 20:10:49.037990   30564 request.go:628] Waited for 195.445642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:10:49.038067   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:10:49.038074   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:49.038086   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:49.038096   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:49.040785   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:10:49.040826   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:49.040836   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:49.040843   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:49.040848   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:49.040855   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:49 GMT
	I0626 20:10:49.040860   30564 round_trippers.go:580]     Audit-Id: 542d9cc6-a970-41da-9ffb-9c0205af4715
	I0626 20:10:49.040866   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:49.040957   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8","resourceVersion":"710","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I0626 20:10:49.041280   30564 pod_ready.go:92] pod "kube-proxy-wwg6x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:49.041296   30564 pod_ready.go:81] duration metric: took 399.166223ms waiting for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:49.041307   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:49.237695   30564 request.go:628] Waited for 196.312815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:10:49.237757   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:10:49.237762   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:49.237770   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:49.237778   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:49.241083   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:49.241105   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:49.241112   30564 round_trippers.go:580]     Audit-Id: 98fb79a0-b802-458f-ac45-17e0911b1c28
	I0626 20:10:49.241119   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:49.241128   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:49.241136   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:49.241146   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:49.241160   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:49 GMT
	I0626 20:10:49.241399   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-050558","namespace":"kube-system","uid":"1645e687-25f4-49b9-9d11-5f3db01fe7d2","resourceVersion":"848","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.mirror":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.seen":"2023-06-26T19:59:55.756274617Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0626 20:10:49.438282   30564 request.go:628] Waited for 196.421671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:49.438356   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:10:49.438365   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:49.438378   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:49.438389   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:49.441733   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:49.441759   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:49.441787   30564 round_trippers.go:580]     Audit-Id: a58c4ff9-41f6-49ef-8bc9-6b9c284ba216
	I0626 20:10:49.441797   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:49.441805   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:49.441814   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:49.441822   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:49.441829   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:49 GMT
	I0626 20:10:49.442088   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0626 20:10:49.442469   30564 pod_ready.go:92] pod "kube-scheduler-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:10:49.442489   30564 pod_ready.go:81] duration metric: took 401.173374ms waiting for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:10:49.442503   30564 pod_ready.go:38] duration metric: took 9.596979553s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:10:49.442523   30564 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:10:49.442573   30564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:10:49.456928   30564 command_runner.go:130] > 1073
	I0626 20:10:49.456958   30564 api_server.go:72] duration metric: took 10.791132533s to wait for apiserver process to appear ...
	I0626 20:10:49.456967   30564 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:10:49.456981   30564 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:10:49.461952   30564 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
	I0626 20:10:49.462037   30564 round_trippers.go:463] GET https://192.168.39.229:8443/version
	I0626 20:10:49.462045   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:49.462055   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:49.462070   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:49.463303   30564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:10:49.463319   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:49.463326   30564 round_trippers.go:580]     Content-Length: 263
	I0626 20:10:49.463332   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:49 GMT
	I0626 20:10:49.463338   30564 round_trippers.go:580]     Audit-Id: 22b3e936-c3ca-47a5-9df6-4cb883841614
	I0626 20:10:49.463343   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:49.463349   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:49.463354   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:49.463360   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:49.463395   30564 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0626 20:10:49.463445   30564 api_server.go:141] control plane version: v1.27.3
	I0626 20:10:49.463461   30564 api_server.go:131] duration metric: took 6.48884ms to wait for apiserver health ...
	I0626 20:10:49.463467   30564 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:10:49.637809   30564 request.go:628] Waited for 174.264276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:10:49.637874   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:10:49.637882   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:49.637894   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:49.637909   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:49.642678   30564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:10:49.642701   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:49.642709   30564 round_trippers.go:580]     Audit-Id: c35cea7b-47fb-4b60-aca1-a24dfaa0d9ba
	I0626 20:10:49.642715   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:49.642721   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:49.642729   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:49.642738   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:49.642750   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:49 GMT
	I0626 20:10:49.643510   30564 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"864"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"838","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81889 chars]
	I0626 20:10:49.645965   30564 system_pods.go:59] 12 kube-system pods found
	I0626 20:10:49.645986   30564 system_pods.go:61] "coredns-5d78c9869d-5wffn" [c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5] Running
	I0626 20:10:49.645991   30564 system_pods.go:61] "etcd-multinode-050558" [457d2420-8ece-4b92-8281-7866fa6a884a] Running
	I0626 20:10:49.645998   30564 system_pods.go:61] "kindnet-9tprm" [23cc17d6-1401-413a-8f2f-71931b10ae4e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0626 20:10:49.646004   30564 system_pods.go:61] "kindnet-kmcqm" [ae3400e2-ef47-4a1a-ade8-dde5988de08e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0626 20:10:49.646008   30564 system_pods.go:61] "kindnet-vjpzs" [695a59a7-ddfd-4f5f-8084-86279daa17b6] Running
	I0626 20:10:49.646013   30564 system_pods.go:61] "kube-apiserver-multinode-050558" [00573436-b505-4be6-a86a-3ba9b74e1ad5] Running
	I0626 20:10:49.646018   30564 system_pods.go:61] "kube-controller-manager-multinode-050558" [d90eb1a6-03bd-4bdf-b50d-9448cef0b578] Running
	I0626 20:10:49.646022   30564 system_pods.go:61] "kube-proxy-57pwt" [4611d3e6-962b-437a-8b38-387719e69da6] Running
	I0626 20:10:49.646025   30564 system_pods.go:61] "kube-proxy-67x99" [7ffa817a-1b4a-41a1-9a56-5c65849dc57e] Running
	I0626 20:10:49.646029   30564 system_pods.go:61] "kube-proxy-wwg6x" [bdb04dda-dd36-45be-8f0e-7dad2bce1ef0] Running
	I0626 20:10:49.646033   30564 system_pods.go:61] "kube-scheduler-multinode-050558" [1645e687-25f4-49b9-9d11-5f3db01fe7d2] Running
	I0626 20:10:49.646037   30564 system_pods.go:61] "storage-provisioner" [fd433ce1-f37e-4168-930f-a93cd00821cb] Running
	I0626 20:10:49.646041   30564 system_pods.go:74] duration metric: took 182.57037ms to wait for pod list to return data ...
	I0626 20:10:49.646051   30564 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:10:49.838066   30564 request.go:628] Waited for 191.956733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/default/serviceaccounts
	I0626 20:10:49.838117   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/default/serviceaccounts
	I0626 20:10:49.838122   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:49.838131   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:49.838138   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:49.841419   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:10:49.841439   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:49.841446   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:49.841452   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:49.841458   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:49.841463   30564 round_trippers.go:580]     Content-Length: 261
	I0626 20:10:49.841471   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:49 GMT
	I0626 20:10:49.841486   30564 round_trippers.go:580]     Audit-Id: 806be722-2cd2-41af-b3dc-e12ca46cabf9
	I0626 20:10:49.841499   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:49.841526   30564 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"864"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"74e5f487-10bd-4618-86f4-85ee2fa9143f","resourceVersion":"318","creationTimestamp":"2023-06-26T20:00:16Z"}}]}
	I0626 20:10:49.841738   30564 default_sa.go:45] found service account: "default"
	I0626 20:10:49.841758   30564 default_sa.go:55] duration metric: took 195.701333ms for default service account to be created ...
	I0626 20:10:49.841768   30564 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:10:50.038258   30564 request.go:628] Waited for 196.431719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:10:50.038317   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:10:50.038324   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:50.038333   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:50.038341   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:50.046450   30564 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0626 20:10:50.046476   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:50.046487   30564 round_trippers.go:580]     Audit-Id: 934232e3-58bf-4498-aff5-2dc10dbeb37c
	I0626 20:10:50.046495   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:50.046502   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:50.046510   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:50.046519   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:50.046528   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:50 GMT
	I0626 20:10:50.047514   30564 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"864"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"838","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81889 chars]
	I0626 20:10:50.049991   30564 system_pods.go:86] 12 kube-system pods found
	I0626 20:10:50.050016   30564 system_pods.go:89] "coredns-5d78c9869d-5wffn" [c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5] Running
	I0626 20:10:50.050024   30564 system_pods.go:89] "etcd-multinode-050558" [457d2420-8ece-4b92-8281-7866fa6a884a] Running
	I0626 20:10:50.050037   30564 system_pods.go:89] "kindnet-9tprm" [23cc17d6-1401-413a-8f2f-71931b10ae4e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0626 20:10:50.050047   30564 system_pods.go:89] "kindnet-kmcqm" [ae3400e2-ef47-4a1a-ade8-dde5988de08e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0626 20:10:50.050055   30564 system_pods.go:89] "kindnet-vjpzs" [695a59a7-ddfd-4f5f-8084-86279daa17b6] Running
	I0626 20:10:50.050063   30564 system_pods.go:89] "kube-apiserver-multinode-050558" [00573436-b505-4be6-a86a-3ba9b74e1ad5] Running
	I0626 20:10:50.050074   30564 system_pods.go:89] "kube-controller-manager-multinode-050558" [d90eb1a6-03bd-4bdf-b50d-9448cef0b578] Running
	I0626 20:10:50.050083   30564 system_pods.go:89] "kube-proxy-57pwt" [4611d3e6-962b-437a-8b38-387719e69da6] Running
	I0626 20:10:50.050100   30564 system_pods.go:89] "kube-proxy-67x99" [7ffa817a-1b4a-41a1-9a56-5c65849dc57e] Running
	I0626 20:10:50.050106   30564 system_pods.go:89] "kube-proxy-wwg6x" [bdb04dda-dd36-45be-8f0e-7dad2bce1ef0] Running
	I0626 20:10:50.050113   30564 system_pods.go:89] "kube-scheduler-multinode-050558" [1645e687-25f4-49b9-9d11-5f3db01fe7d2] Running
	I0626 20:10:50.050120   30564 system_pods.go:89] "storage-provisioner" [fd433ce1-f37e-4168-930f-a93cd00821cb] Running
	I0626 20:10:50.050132   30564 system_pods.go:126] duration metric: took 208.357492ms to wait for k8s-apps to be running ...
	I0626 20:10:50.050144   30564 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:10:50.050189   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:10:50.065212   30564 system_svc.go:56] duration metric: took 15.062381ms WaitForService to wait for kubelet.
	I0626 20:10:50.065232   30564 kubeadm.go:581] duration metric: took 11.399406319s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:10:50.065248   30564 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:10:50.237596   30564 request.go:628] Waited for 172.281148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes
	I0626 20:10:50.237661   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes
	I0626 20:10:50.237670   30564 round_trippers.go:469] Request Headers:
	I0626 20:10:50.237692   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:10:50.237700   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:10:50.242185   30564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:10:50.242205   30564 round_trippers.go:577] Response Headers:
	I0626 20:10:50.242212   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:10:50 GMT
	I0626 20:10:50.242221   30564 round_trippers.go:580]     Audit-Id: 9036a796-1fc8-409f-af94-fef4c2eecde2
	I0626 20:10:50.242226   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:10:50.242232   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:10:50.242237   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:10:50.242242   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:10:50.242382   30564 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"864"},"items":[{"metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"829","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15076 chars]
	I0626 20:10:50.242964   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:10:50.242982   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:10:50.243010   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:10:50.243015   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:10:50.243021   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:10:50.243030   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:10:50.243038   30564 node_conditions.go:105] duration metric: took 177.783114ms to run NodePressure ...
	I0626 20:10:50.243059   30564 start.go:228] waiting for startup goroutines ...
	I0626 20:10:50.243067   30564 start.go:233] waiting for cluster config update ...
	I0626 20:10:50.243074   30564 start.go:242] writing updated cluster config ...
	I0626 20:10:50.243543   30564 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:10:50.243653   30564 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 20:10:50.247422   30564 out.go:177] * Starting worker node multinode-050558-m02 in cluster multinode-050558
	I0626 20:10:50.248879   30564 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:10:50.248925   30564 cache.go:57] Caching tarball of preloaded images
	I0626 20:10:50.249061   30564 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 20:10:50.249074   30564 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 20:10:50.249220   30564 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 20:10:50.255260   30564 start.go:365] acquiring machines lock for multinode-050558-m02: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:10:50.255354   30564 start.go:369] acquired machines lock for "multinode-050558-m02" in 44.168µs
	I0626 20:10:50.255373   30564 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:10:50.255387   30564 fix.go:54] fixHost starting: m02
	I0626 20:10:50.255708   30564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:10:50.255740   30564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:10:50.270938   30564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0626 20:10:50.271356   30564 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:10:50.271876   30564 main.go:141] libmachine: Using API Version  1
	I0626 20:10:50.271902   30564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:10:50.272226   30564 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:10:50.272452   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:10:50.272609   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetState
	I0626 20:10:50.274450   30564 fix.go:102] recreateIfNeeded on multinode-050558-m02: state=Running err=<nil>
	W0626 20:10:50.274490   30564 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:10:50.276783   30564 out.go:177] * Updating the running kvm2 "multinode-050558-m02" VM ...
	I0626 20:10:50.278501   30564 machine.go:88] provisioning docker machine ...
	I0626 20:10:50.278527   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:10:50.278752   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetMachineName
	I0626 20:10:50.278922   30564 buildroot.go:166] provisioning hostname "multinode-050558-m02"
	I0626 20:10:50.278945   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetMachineName
	I0626 20:10:50.279110   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:10:50.281863   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.282407   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:10:50.282465   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.282589   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:10:50.282747   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:10:50.282902   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:10:50.283068   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:10:50.283257   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:10:50.283696   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:10:50.283711   30564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-050558-m02 && echo "multinode-050558-m02" | sudo tee /etc/hostname
	I0626 20:10:50.437080   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-050558-m02
	
	I0626 20:10:50.437107   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:10:50.439705   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.440080   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:10:50.440117   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.440241   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:10:50.440427   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:10:50.440597   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:10:50.440749   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:10:50.440883   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:10:50.441252   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:10:50.441271   30564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-050558-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-050558-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-050558-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:10:50.574676   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:10:50.574704   30564 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:10:50.574729   30564 buildroot.go:174] setting up certificates
	I0626 20:10:50.574738   30564 provision.go:83] configureAuth start
	I0626 20:10:50.574749   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetMachineName
	I0626 20:10:50.575067   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetIP
	I0626 20:10:50.577819   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.578193   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:10:50.578223   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.578424   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:10:50.580422   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.580782   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:10:50.580811   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.580955   30564 provision.go:138] copyHostCerts
	I0626 20:10:50.580984   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:10:50.581014   30564 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:10:50.581024   30564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:10:50.581111   30564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:10:50.581208   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:10:50.581234   30564 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:10:50.581241   30564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:10:50.581277   30564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:10:50.581344   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:10:50.581362   30564 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:10:50.581368   30564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:10:50.581413   30564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:10:50.581467   30564 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.multinode-050558-m02 san=[192.168.39.133 192.168.39.133 localhost 127.0.0.1 minikube multinode-050558-m02]
	I0626 20:10:50.883089   30564 provision.go:172] copyRemoteCerts
	I0626 20:10:50.883144   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:10:50.883165   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:10:50.885545   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.885858   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:10:50.885895   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:50.886096   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:10:50.886303   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:10:50.886457   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:10:50.886570   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa Username:docker}
	I0626 20:10:50.985812   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0626 20:10:50.985875   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:10:51.012478   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0626 20:10:51.012547   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0626 20:10:51.038883   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0626 20:10:51.038956   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:10:51.065761   30564 provision.go:86] duration metric: configureAuth took 491.009292ms
	I0626 20:10:51.065793   30564 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:10:51.066019   30564 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:10:51.066102   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:10:51.069003   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:51.069500   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:10:51.069528   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:10:51.069747   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:10:51.069944   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:10:51.070154   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:10:51.070337   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:10:51.070539   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:10:51.070929   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:10:51.070947   30564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:12:21.550283   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:12:21.550308   30564 machine.go:91] provisioned docker machine in 1m31.271786786s
	I0626 20:12:21.550320   30564 start.go:300] post-start starting for "multinode-050558-m02" (driver="kvm2")
	I0626 20:12:21.550333   30564 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:12:21.550361   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:12:21.550697   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:12:21.550730   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:12:21.553538   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.553860   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:12:21.553882   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.554126   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:12:21.554348   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:12:21.554521   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:12:21.554643   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa Username:docker}
	I0626 20:12:21.655295   30564 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:12:21.659692   30564 command_runner.go:130] > NAME=Buildroot
	I0626 20:12:21.659709   30564 command_runner.go:130] > VERSION=2021.02.12-1-ge2e95ab-dirty
	I0626 20:12:21.659713   30564 command_runner.go:130] > ID=buildroot
	I0626 20:12:21.659719   30564 command_runner.go:130] > VERSION_ID=2021.02.12
	I0626 20:12:21.659724   30564 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0626 20:12:21.659748   30564 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:12:21.659759   30564 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:12:21.659827   30564 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:12:21.659916   30564 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:12:21.659926   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /etc/ssl/certs/144432.pem
	I0626 20:12:21.660022   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:12:21.668107   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:12:21.691297   30564 start.go:303] post-start completed in 140.964604ms
	I0626 20:12:21.691317   30564 fix.go:56] fixHost completed within 1m31.435931257s
	I0626 20:12:21.691353   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:12:21.693725   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.694035   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:12:21.694067   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.694191   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:12:21.694430   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:12:21.694601   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:12:21.694730   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:12:21.694896   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:12:21.695266   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0626 20:12:21.695279   30564 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:12:21.830820   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687810341.821418195
	
	I0626 20:12:21.830841   30564 fix.go:206] guest clock: 1687810341.821418195
	I0626 20:12:21.830847   30564 fix.go:219] Guest: 2023-06-26 20:12:21.821418195 +0000 UTC Remote: 2023-06-26 20:12:21.691321399 +0000 UTC m=+450.105395807 (delta=130.096796ms)
	I0626 20:12:21.830870   30564 fix.go:190] guest clock delta is within tolerance: 130.096796ms
	I0626 20:12:21.830876   30564 start.go:83] releasing machines lock for "multinode-050558-m02", held for 1m31.575513446s
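	The fix.go lines above read the guest clock and accept a ~130ms delta against the host. A hedged sketch of that check follows; the 2s tolerance is an assumption (the log reports the delta but not the threshold), and float parsing of date +%s.%N is precise enough for drift detection.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	key := "/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa"
	// Ask the guest for its clock in seconds.nanoseconds form.
	out, err := exec.Command("ssh", "-i", key, "docker@192.168.39.133", "date +%s.%N").Output()
	if err != nil {
		log.Fatal(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		log.Fatal(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold; the log shows ~130ms
	fmt.Printf("guest clock delta: %v (tolerance %v)\n", delta, tolerance)
	if delta > tolerance {
		fmt.Println("clock drift exceeds tolerance; consider syncing the guest clock")
	}
}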
	I0626 20:12:21.830907   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:12:21.831179   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetIP
	I0626 20:12:21.834112   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.834584   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:12:21.834607   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.836809   30564 out.go:177] * Found network options:
	I0626 20:12:21.838442   30564 out.go:177]   - NO_PROXY=192.168.39.229
	W0626 20:12:21.839924   30564 proxy.go:119] fail to check proxy env: Error ip not in block
	I0626 20:12:21.839953   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:12:21.840567   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:12:21.840760   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:12:21.840814   30564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:12:21.840856   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	W0626 20:12:21.840948   30564 proxy.go:119] fail to check proxy env: Error ip not in block
	I0626 20:12:21.841027   30564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:12:21.841049   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:12:21.843729   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.843964   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.844080   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:12:21.844127   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.844346   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:12:21.844419   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:12:21.844449   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:21.844498   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:12:21.844583   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:12:21.844662   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:12:21.844743   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:12:21.844803   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa Username:docker}
	I0626 20:12:21.844846   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:12:21.844954   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa Username:docker}
	I0626 20:12:22.085833   30564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 20:12:22.085843   30564 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0626 20:12:22.091860   30564 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0626 20:12:22.091962   30564 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:12:22.092035   30564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:12:22.099960   30564 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
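	The find/mv step above renames any bridge or podman CNI config to *.mk_disabled so cri-o ignores it. A local Go sketch of the same rename pass (assuming it runs as root on the guest rather than through ssh_runner):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	var matches []string
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		m, err := filepath.Glob(pat)
		if err != nil {
			log.Fatal(err)
		}
		matches = append(matches, m...)
	}
	disabled := 0
	for _, f := range matches {
		if strings.HasSuffix(f, ".mk_disabled") {
			continue // already disabled on a previous pass
		}
		if err := os.Rename(f, f+".mk_disabled"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("disabled", f)
		disabled++
	}
	if disabled == 0 {
		fmt.Println("no active bridge cni configs found - nothing to disable")
	}
}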
	I0626 20:12:22.099992   30564 start.go:466] detecting cgroup driver to use...
	I0626 20:12:22.100047   30564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:12:22.113450   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:12:22.126320   30564 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:12:22.126363   30564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:12:22.139342   30564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:12:22.152099   30564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:12:22.294843   30564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:12:22.430266   30564 docker.go:212] disabling docker service ...
	I0626 20:12:22.430343   30564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:12:22.446743   30564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:12:22.459139   30564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:12:22.588783   30564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:12:22.719114   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
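	The docker.go lines above stop, disable, and mask the cri-docker and docker units so cri-o is the only runtime left active. A sketch of that sequence for the docker units, assumed to run on the guest with sudo available; failures on absent units are tolerated, matching the best-effort behaviour in the log:

package main

import (
	"log"
	"os/exec"
)

// run executes a systemctl step and logs (but does not abort on) failure,
// since some units may simply not exist on a given image.
func run(args ...string) {
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		log.Printf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	run("systemctl", "stop", "-f", "docker.socket")
	run("systemctl", "stop", "-f", "docker.service")
	run("systemctl", "disable", "docker.socket")
	run("systemctl", "mask", "docker.service")
}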
	I0626 20:12:22.731093   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:12:22.747720   30564 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
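	The step above points crictl at cri-o's socket by writing /etc/crictl.yaml. The same file can be produced directly, as in this sketch (run as root on the node):

package main

import (
	"log"
	"os"
)

func main() {
	// Identical content to what the log's printf | tee pipeline writes.
	const conf = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0644); err != nil {
		log.Fatal(err)
	}
}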
	I0626 20:12:22.748138   30564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:12:22.748188   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:12:22.757385   30564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:12:22.757448   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:12:22.766542   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:12:22.775684   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:12:22.784622   30564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
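	The sed invocations above edit cri-o's drop-in config in place. A Go sketch of the first two substitutions (pause image and cgroup driver); the conmon_cgroup delete/insert pair is omitted for brevity:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror the two sed substitutions from the log, including any
	// commented-out occurrences of the keys.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		log.Fatal(err)
	}
}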
	I0626 20:12:22.793870   30564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:12:22.801670   30564 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0626 20:12:22.801833   30564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:12:22.809606   30564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:12:22.935771   30564 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:12:23.155731   30564 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:12:23.155786   30564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:12:23.160749   30564 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0626 20:12:23.160796   30564 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0626 20:12:23.160807   30564 command_runner.go:130] > Device: 16h/22d	Inode: 1233        Links: 1
	I0626 20:12:23.160817   30564 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 20:12:23.160828   30564 command_runner.go:130] > Access: 2023-06-26 20:12:23.080132206 +0000
	I0626 20:12:23.160837   30564 command_runner.go:130] > Modify: 2023-06-26 20:12:23.080132206 +0000
	I0626 20:12:23.160848   30564 command_runner.go:130] > Change: 2023-06-26 20:12:23.080132206 +0000
	I0626 20:12:23.160855   30564 command_runner.go:130] >  Birth: -
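	start.go waits up to 60s for the cri-o socket before probing crictl. A sketch of that polling loop; the 500ms poll interval is an assumption, since the log only states the 60s budget:

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		// Succeed once the path exists and is actually a socket.
		if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
			fmt.Println("socket ready:", sock)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for %s", sock)
		}
		time.Sleep(500 * time.Millisecond)
	}
}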
	I0626 20:12:23.160881   30564 start.go:534] Will wait 60s for crictl version
	I0626 20:12:23.160928   30564 ssh_runner.go:195] Run: which crictl
	I0626 20:12:23.165188   30564 command_runner.go:130] > /usr/bin/crictl
	I0626 20:12:23.165277   30564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:12:23.201388   30564 command_runner.go:130] > Version:  0.1.0
	I0626 20:12:23.201414   30564 command_runner.go:130] > RuntimeName:  cri-o
	I0626 20:12:23.201422   30564 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0626 20:12:23.201431   30564 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0626 20:12:23.201451   30564 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:12:23.201513   30564 ssh_runner.go:195] Run: crio --version
	I0626 20:12:23.257521   30564 command_runner.go:130] > crio version 1.24.1
	I0626 20:12:23.257548   30564 command_runner.go:130] > Version:          1.24.1
	I0626 20:12:23.257559   30564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 20:12:23.257566   30564 command_runner.go:130] > GitTreeState:     dirty
	I0626 20:12:23.257573   30564 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 20:12:23.257579   30564 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 20:12:23.257585   30564 command_runner.go:130] > Compiler:         gc
	I0626 20:12:23.257591   30564 command_runner.go:130] > Platform:         linux/amd64
	I0626 20:12:23.257605   30564 command_runner.go:130] > Linkmode:         dynamic
	I0626 20:12:23.257620   30564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 20:12:23.257628   30564 command_runner.go:130] > SeccompEnabled:   true
	I0626 20:12:23.257636   30564 command_runner.go:130] > AppArmorEnabled:  false
	I0626 20:12:23.259116   30564 ssh_runner.go:195] Run: crio --version
	I0626 20:12:23.310892   30564 command_runner.go:130] > crio version 1.24.1
	I0626 20:12:23.310911   30564 command_runner.go:130] > Version:          1.24.1
	I0626 20:12:23.310918   30564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 20:12:23.310922   30564 command_runner.go:130] > GitTreeState:     dirty
	I0626 20:12:23.310928   30564 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 20:12:23.310932   30564 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 20:12:23.310938   30564 command_runner.go:130] > Compiler:         gc
	I0626 20:12:23.310944   30564 command_runner.go:130] > Platform:         linux/amd64
	I0626 20:12:23.310953   30564 command_runner.go:130] > Linkmode:         dynamic
	I0626 20:12:23.310963   30564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 20:12:23.310969   30564 command_runner.go:130] > SeccompEnabled:   true
	I0626 20:12:23.310975   30564 command_runner.go:130] > AppArmorEnabled:  false
	I0626 20:12:23.312859   30564 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:12:23.314422   30564 out.go:177]   - env NO_PROXY=192.168.39.229
	I0626 20:12:23.315851   30564 main.go:141] libmachine: (multinode-050558-m02) Calling .GetIP
	I0626 20:12:23.318717   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:23.319162   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:12:23.319192   30564 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:12:23.319454   30564 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 20:12:23.323982   30564 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0626 20:12:23.324020   30564 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558 for IP: 192.168.39.133
	I0626 20:12:23.324035   30564 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:12:23.324152   30564 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:12:23.324189   30564 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:12:23.324199   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0626 20:12:23.324212   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0626 20:12:23.324224   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0626 20:12:23.324238   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0626 20:12:23.324286   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:12:23.324313   30564 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:12:23.324323   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:12:23.324351   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:12:23.324383   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:12:23.324403   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:12:23.324439   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:12:23.324463   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:12:23.324482   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem -> /usr/share/ca-certificates/14443.pem
	I0626 20:12:23.324495   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /usr/share/ca-certificates/144432.pem
	I0626 20:12:23.324825   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:12:23.348723   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:12:23.372847   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:12:23.395552   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:12:23.418962   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:12:23.442791   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:12:23.466916   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:12:23.489823   30564 ssh_runner.go:195] Run: openssl version
	I0626 20:12:23.495606   30564 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0626 20:12:23.495685   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:12:23.506460   30564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:12:23.511215   30564 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:12:23.511236   30564 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:12:23.511267   30564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:12:23.517083   30564 command_runner.go:130] > b5213941
	I0626 20:12:23.517220   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:12:23.526921   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:12:23.539089   30564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:12:23.543689   30564 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:12:23.543766   30564 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:12:23.543823   30564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:12:23.549290   30564 command_runner.go:130] > 51391683
	I0626 20:12:23.549700   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:12:23.559428   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:12:23.573294   30564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:12:23.578161   30564 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:12:23.578237   30564 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:12:23.578294   30564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:12:23.584027   30564 command_runner.go:130] > 3ec20f2e
	I0626 20:12:23.584086   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
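	The openssl/ln pairs above install each CA by its OpenSSL subject hash, e.g. linking minikubeCA.pem as b5213941.0 in /etc/ssl/certs. A sketch of one such install, assuming the openssl binary is present and the process has root privileges:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same hash the log computes: openssl x509 -hash -noout -in <cert>.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
}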
	I0626 20:12:23.594867   30564 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:12:23.598923   30564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 20:12:23.598963   30564 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 20:12:23.599048   30564 ssh_runner.go:195] Run: crio config
	I0626 20:12:23.657255   30564 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0626 20:12:23.657287   30564 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0626 20:12:23.657298   30564 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0626 20:12:23.657304   30564 command_runner.go:130] > #
	I0626 20:12:23.657316   30564 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0626 20:12:23.657326   30564 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0626 20:12:23.657336   30564 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0626 20:12:23.657346   30564 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0626 20:12:23.657352   30564 command_runner.go:130] > # reload'.
	I0626 20:12:23.657361   30564 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0626 20:12:23.657391   30564 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0626 20:12:23.657405   30564 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0626 20:12:23.657414   30564 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0626 20:12:23.657423   30564 command_runner.go:130] > [crio]
	I0626 20:12:23.657436   30564 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0626 20:12:23.657448   30564 command_runner.go:130] > # containers images, in this directory.
	I0626 20:12:23.657460   30564 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0626 20:12:23.657479   30564 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0626 20:12:23.657492   30564 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0626 20:12:23.657503   30564 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0626 20:12:23.657512   30564 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0626 20:12:23.657528   30564 command_runner.go:130] > storage_driver = "overlay"
	I0626 20:12:23.657539   30564 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0626 20:12:23.657548   30564 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0626 20:12:23.657555   30564 command_runner.go:130] > storage_option = [
	I0626 20:12:23.657562   30564 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0626 20:12:23.657571   30564 command_runner.go:130] > ]
	I0626 20:12:23.657582   30564 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0626 20:12:23.657594   30564 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0626 20:12:23.657605   30564 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0626 20:12:23.657612   30564 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0626 20:12:23.657622   30564 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0626 20:12:23.657632   30564 command_runner.go:130] > # always happen on a node reboot
	I0626 20:12:23.657639   30564 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0626 20:12:23.657649   30564 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0626 20:12:23.657661   30564 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0626 20:12:23.657674   30564 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0626 20:12:23.657684   30564 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0626 20:12:23.657700   30564 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0626 20:12:23.657717   30564 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0626 20:12:23.657727   30564 command_runner.go:130] > # internal_wipe = true
	I0626 20:12:23.657736   30564 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0626 20:12:23.657748   30564 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0626 20:12:23.657760   30564 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0626 20:12:23.657769   30564 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0626 20:12:23.657782   30564 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0626 20:12:23.657791   30564 command_runner.go:130] > [crio.api]
	I0626 20:12:23.657800   30564 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0626 20:12:23.657810   30564 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0626 20:12:23.657821   30564 command_runner.go:130] > # IP address on which the stream server will listen.
	I0626 20:12:23.657832   30564 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0626 20:12:23.657846   30564 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0626 20:12:23.657856   30564 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0626 20:12:23.657863   30564 command_runner.go:130] > # stream_port = "0"
	I0626 20:12:23.657874   30564 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0626 20:12:23.657884   30564 command_runner.go:130] > # stream_enable_tls = false
	I0626 20:12:23.657896   30564 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0626 20:12:23.657908   30564 command_runner.go:130] > # stream_idle_timeout = ""
	I0626 20:12:23.657920   30564 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0626 20:12:23.657932   30564 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0626 20:12:23.657941   30564 command_runner.go:130] > # minutes.
	I0626 20:12:23.657947   30564 command_runner.go:130] > # stream_tls_cert = ""
	I0626 20:12:23.657960   30564 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0626 20:12:23.657985   30564 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0626 20:12:23.657994   30564 command_runner.go:130] > # stream_tls_key = ""
	I0626 20:12:23.658004   30564 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0626 20:12:23.658016   30564 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0626 20:12:23.658027   30564 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0626 20:12:23.658037   30564 command_runner.go:130] > # stream_tls_ca = ""
	I0626 20:12:23.658048   30564 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 20:12:23.658059   30564 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0626 20:12:23.658074   30564 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 20:12:23.658085   30564 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0626 20:12:23.658150   30564 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0626 20:12:23.658165   30564 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0626 20:12:23.658174   30564 command_runner.go:130] > [crio.runtime]
	I0626 20:12:23.658184   30564 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0626 20:12:23.658196   30564 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0626 20:12:23.658206   30564 command_runner.go:130] > # "nofile=1024:2048"
	I0626 20:12:23.658220   30564 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0626 20:12:23.658231   30564 command_runner.go:130] > # default_ulimits = [
	I0626 20:12:23.658238   30564 command_runner.go:130] > # ]
	I0626 20:12:23.658248   30564 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0626 20:12:23.658257   30564 command_runner.go:130] > # no_pivot = false
	I0626 20:12:23.658266   30564 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0626 20:12:23.658275   30564 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0626 20:12:23.658286   30564 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0626 20:12:23.658295   30564 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0626 20:12:23.658306   30564 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0626 20:12:23.658318   30564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 20:12:23.658329   30564 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0626 20:12:23.658338   30564 command_runner.go:130] > # Cgroup setting for conmon
	I0626 20:12:23.658349   30564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0626 20:12:23.658360   30564 command_runner.go:130] > conmon_cgroup = "pod"
	I0626 20:12:23.658370   30564 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0626 20:12:23.658381   30564 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0626 20:12:23.658391   30564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 20:12:23.658400   30564 command_runner.go:130] > conmon_env = [
	I0626 20:12:23.658409   30564 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0626 20:12:23.658417   30564 command_runner.go:130] > ]
	I0626 20:12:23.658426   30564 command_runner.go:130] > # Additional environment variables to set for all the
	I0626 20:12:23.658437   30564 command_runner.go:130] > # containers. These are overridden if set in the
	I0626 20:12:23.658451   30564 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0626 20:12:23.658462   30564 command_runner.go:130] > # default_env = [
	I0626 20:12:23.658472   30564 command_runner.go:130] > # ]
	I0626 20:12:23.658481   30564 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0626 20:12:23.658489   30564 command_runner.go:130] > # selinux = false
	I0626 20:12:23.658499   30564 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0626 20:12:23.658514   30564 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0626 20:12:23.658526   30564 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0626 20:12:23.658535   30564 command_runner.go:130] > # seccomp_profile = ""
	I0626 20:12:23.658544   30564 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0626 20:12:23.658556   30564 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0626 20:12:23.658569   30564 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0626 20:12:23.658576   30564 command_runner.go:130] > # which might increase security.
	I0626 20:12:23.658587   30564 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0626 20:12:23.658600   30564 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0626 20:12:23.658614   30564 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0626 20:12:23.658627   30564 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0626 20:12:23.658640   30564 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0626 20:12:23.658652   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:12:23.658663   30564 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0626 20:12:23.658675   30564 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0626 20:12:23.658686   30564 command_runner.go:130] > # the cgroup blockio controller.
	I0626 20:12:23.658698   30564 command_runner.go:130] > # blockio_config_file = ""
	I0626 20:12:23.658713   30564 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0626 20:12:23.658722   30564 command_runner.go:130] > # irqbalance daemon.
	I0626 20:12:23.658731   30564 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0626 20:12:23.658743   30564 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0626 20:12:23.658756   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:12:23.658763   30564 command_runner.go:130] > # rdt_config_file = ""
	I0626 20:12:23.658774   30564 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0626 20:12:23.658784   30564 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0626 20:12:23.658794   30564 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0626 20:12:23.658804   30564 command_runner.go:130] > # separate_pull_cgroup = ""
	I0626 20:12:23.658816   30564 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0626 20:12:23.658830   30564 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0626 20:12:23.658839   30564 command_runner.go:130] > # will be added.
	I0626 20:12:23.658846   30564 command_runner.go:130] > # default_capabilities = [
	I0626 20:12:23.658855   30564 command_runner.go:130] > # 	"CHOWN",
	I0626 20:12:23.658861   30564 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0626 20:12:23.658870   30564 command_runner.go:130] > # 	"FSETID",
	I0626 20:12:23.658876   30564 command_runner.go:130] > # 	"FOWNER",
	I0626 20:12:23.658886   30564 command_runner.go:130] > # 	"SETGID",
	I0626 20:12:23.658895   30564 command_runner.go:130] > # 	"SETUID",
	I0626 20:12:23.658902   30564 command_runner.go:130] > # 	"SETPCAP",
	I0626 20:12:23.658913   30564 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0626 20:12:23.658923   30564 command_runner.go:130] > # 	"KILL",
	I0626 20:12:23.658930   30564 command_runner.go:130] > # ]
	I0626 20:12:23.658940   30564 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0626 20:12:23.658952   30564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 20:12:23.658962   30564 command_runner.go:130] > # default_sysctls = [
	I0626 20:12:23.658966   30564 command_runner.go:130] > # ]
	I0626 20:12:23.658982   30564 command_runner.go:130] > # List of devices on the host that a
	I0626 20:12:23.658995   30564 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0626 20:12:23.659005   30564 command_runner.go:130] > # allowed_devices = [
	I0626 20:12:23.659015   30564 command_runner.go:130] > # 	"/dev/fuse",
	I0626 20:12:23.659021   30564 command_runner.go:130] > # ]
	I0626 20:12:23.659029   30564 command_runner.go:130] > # List of additional devices, specified as
	I0626 20:12:23.659044   30564 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0626 20:12:23.659055   30564 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0626 20:12:23.659108   30564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 20:12:23.659120   30564 command_runner.go:130] > # additional_devices = [
	I0626 20:12:23.659126   30564 command_runner.go:130] > # ]
	I0626 20:12:23.659134   30564 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0626 20:12:23.659154   30564 command_runner.go:130] > # cdi_spec_dirs = [
	I0626 20:12:23.659160   30564 command_runner.go:130] > # 	"/etc/cdi",
	I0626 20:12:23.659166   30564 command_runner.go:130] > # 	"/var/run/cdi",
	I0626 20:12:23.659171   30564 command_runner.go:130] > # ]
	I0626 20:12:23.659180   30564 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0626 20:12:23.659190   30564 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0626 20:12:23.659197   30564 command_runner.go:130] > # Defaults to false.
	I0626 20:12:23.659211   30564 command_runner.go:130] > # device_ownership_from_security_context = false
	I0626 20:12:23.659222   30564 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0626 20:12:23.659234   30564 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0626 20:12:23.659240   30564 command_runner.go:130] > # hooks_dir = [
	I0626 20:12:23.659247   30564 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0626 20:12:23.659252   30564 command_runner.go:130] > # ]
	I0626 20:12:23.659263   30564 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0626 20:12:23.659274   30564 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0626 20:12:23.659287   30564 command_runner.go:130] > # its default mounts from the following two files:
	I0626 20:12:23.659293   30564 command_runner.go:130] > #
	I0626 20:12:23.659304   30564 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0626 20:12:23.659319   30564 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0626 20:12:23.659332   30564 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0626 20:12:23.659340   30564 command_runner.go:130] > #
	I0626 20:12:23.659349   30564 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0626 20:12:23.659363   30564 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0626 20:12:23.659375   30564 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0626 20:12:23.659388   30564 command_runner.go:130] > #      only add mounts it finds in this file.
	I0626 20:12:23.659396   30564 command_runner.go:130] > #
	I0626 20:12:23.659403   30564 command_runner.go:130] > # default_mounts_file = ""
	I0626 20:12:23.659414   30564 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0626 20:12:23.659428   30564 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0626 20:12:23.659439   30564 command_runner.go:130] > pids_limit = 1024
	I0626 20:12:23.659450   30564 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0626 20:12:23.659462   30564 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0626 20:12:23.659476   30564 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0626 20:12:23.659492   30564 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0626 20:12:23.659501   30564 command_runner.go:130] > # log_size_max = -1
	I0626 20:12:23.659513   30564 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0626 20:12:23.659523   30564 command_runner.go:130] > # log_to_journald = false
	I0626 20:12:23.659533   30564 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0626 20:12:23.659544   30564 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0626 20:12:23.659553   30564 command_runner.go:130] > # Path to directory for container attach sockets.
	I0626 20:12:23.659564   30564 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0626 20:12:23.659574   30564 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0626 20:12:23.659583   30564 command_runner.go:130] > # bind_mount_prefix = ""
	I0626 20:12:23.659591   30564 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0626 20:12:23.659600   30564 command_runner.go:130] > # read_only = false
	I0626 20:12:23.659613   30564 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0626 20:12:23.659628   30564 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0626 20:12:23.659638   30564 command_runner.go:130] > # live configuration reload.
	I0626 20:12:23.659648   30564 command_runner.go:130] > # log_level = "info"
	I0626 20:12:23.659658   30564 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0626 20:12:23.659669   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:12:23.659678   30564 command_runner.go:130] > # log_filter = ""
	I0626 20:12:23.659688   30564 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0626 20:12:23.659700   30564 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0626 20:12:23.659707   30564 command_runner.go:130] > # separated by comma.
	I0626 20:12:23.659716   30564 command_runner.go:130] > # uid_mappings = ""
	I0626 20:12:23.659726   30564 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0626 20:12:23.659738   30564 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0626 20:12:23.659745   30564 command_runner.go:130] > # separated by comma.
	I0626 20:12:23.659754   30564 command_runner.go:130] > # gid_mappings = ""
	I0626 20:12:23.659763   30564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0626 20:12:23.659775   30564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 20:12:23.659788   30564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 20:12:23.659797   30564 command_runner.go:130] > # minimum_mappable_uid = -1
	I0626 20:12:23.659807   30564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0626 20:12:23.659819   30564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 20:12:23.659832   30564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 20:12:23.659842   30564 command_runner.go:130] > # minimum_mappable_gid = -1
	I0626 20:12:23.659850   30564 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0626 20:12:23.659863   30564 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0626 20:12:23.659875   30564 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0626 20:12:23.659884   30564 command_runner.go:130] > # ctr_stop_timeout = 30
	I0626 20:12:23.659894   30564 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0626 20:12:23.659906   30564 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0626 20:12:23.659915   30564 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0626 20:12:23.659922   30564 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0626 20:12:23.660008   30564 command_runner.go:130] > drop_infra_ctr = false
	I0626 20:12:23.660021   30564 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0626 20:12:23.660026   30564 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0626 20:12:23.660033   30564 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0626 20:12:23.660037   30564 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0626 20:12:23.660045   30564 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0626 20:12:23.660050   30564 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0626 20:12:23.660055   30564 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0626 20:12:23.660062   30564 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0626 20:12:23.660068   30564 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0626 20:12:23.660074   30564 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0626 20:12:23.660082   30564 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0626 20:12:23.660089   30564 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0626 20:12:23.660095   30564 command_runner.go:130] > # default_runtime = "runc"
	I0626 20:12:23.660100   30564 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0626 20:12:23.660109   30564 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0626 20:12:23.660119   30564 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0626 20:12:23.660129   30564 command_runner.go:130] > # creation as a file is not desired either.
	I0626 20:12:23.660137   30564 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0626 20:12:23.660145   30564 command_runner.go:130] > # the hostname is being managed dynamically.
	I0626 20:12:23.660149   30564 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0626 20:12:23.660154   30564 command_runner.go:130] > # ]
	I0626 20:12:23.660160   30564 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0626 20:12:23.660171   30564 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0626 20:12:23.660184   30564 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0626 20:12:23.660196   30564 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0626 20:12:23.660206   30564 command_runner.go:130] > #
	I0626 20:12:23.660213   30564 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0626 20:12:23.660223   30564 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0626 20:12:23.660232   30564 command_runner.go:130] > #  runtime_type = "oci"
	I0626 20:12:23.660241   30564 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0626 20:12:23.660252   30564 command_runner.go:130] > #  privileged_without_host_devices = false
	I0626 20:12:23.660261   30564 command_runner.go:130] > #  allowed_annotations = []
	I0626 20:12:23.660265   30564 command_runner.go:130] > # Where:
	I0626 20:12:23.660270   30564 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0626 20:12:23.660277   30564 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0626 20:12:23.660283   30564 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0626 20:12:23.660292   30564 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0626 20:12:23.660295   30564 command_runner.go:130] > #   in $PATH.
	I0626 20:12:23.660301   30564 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0626 20:12:23.660308   30564 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0626 20:12:23.660314   30564 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0626 20:12:23.660317   30564 command_runner.go:130] > #   state.
	I0626 20:12:23.660324   30564 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0626 20:12:23.660331   30564 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0626 20:12:23.660341   30564 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0626 20:12:23.660353   30564 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0626 20:12:23.660365   30564 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0626 20:12:23.660378   30564 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0626 20:12:23.660388   30564 command_runner.go:130] > #   The currently recognized values are:
	I0626 20:12:23.660402   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0626 20:12:23.660415   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0626 20:12:23.660426   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0626 20:12:23.660438   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0626 20:12:23.660452   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0626 20:12:23.660466   30564 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0626 20:12:23.660478   30564 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0626 20:12:23.660490   30564 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0626 20:12:23.660498   30564 command_runner.go:130] > #   should be moved to the container's cgroup
	I0626 20:12:23.660503   30564 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0626 20:12:23.660509   30564 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0626 20:12:23.660514   30564 command_runner.go:130] > runtime_type = "oci"
	I0626 20:12:23.660518   30564 command_runner.go:130] > runtime_root = "/run/runc"
	I0626 20:12:23.660523   30564 command_runner.go:130] > runtime_config_path = ""
	I0626 20:12:23.660527   30564 command_runner.go:130] > monitor_path = ""
	I0626 20:12:23.660534   30564 command_runner.go:130] > monitor_cgroup = ""
	I0626 20:12:23.660538   30564 command_runner.go:130] > monitor_exec_cgroup = ""
	I0626 20:12:23.660544   30564 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0626 20:12:23.660551   30564 command_runner.go:130] > # running containers
	I0626 20:12:23.660556   30564 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0626 20:12:23.660563   30564 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0626 20:12:23.660623   30564 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0626 20:12:23.660634   30564 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0626 20:12:23.660639   30564 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0626 20:12:23.660644   30564 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0626 20:12:23.660651   30564 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0626 20:12:23.660655   30564 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0626 20:12:23.660660   30564 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0626 20:12:23.660667   30564 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0626 20:12:23.660673   30564 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0626 20:12:23.660680   30564 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0626 20:12:23.660686   30564 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0626 20:12:23.660701   30564 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0626 20:12:23.660713   30564 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0626 20:12:23.660725   30564 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0626 20:12:23.660739   30564 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0626 20:12:23.660753   30564 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0626 20:12:23.660761   30564 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0626 20:12:23.660768   30564 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0626 20:12:23.660774   30564 command_runner.go:130] > # Example:
	I0626 20:12:23.660779   30564 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0626 20:12:23.660786   30564 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0626 20:12:23.660791   30564 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0626 20:12:23.660798   30564 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0626 20:12:23.660802   30564 command_runner.go:130] > # cpuset = "0-1"
	I0626 20:12:23.660808   30564 command_runner.go:130] > # cpushares = 0
	I0626 20:12:23.660812   30564 command_runner.go:130] > # Where:
	I0626 20:12:23.660817   30564 command_runner.go:130] > # The workload name is workload-type.
	I0626 20:12:23.660826   30564 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0626 20:12:23.660831   30564 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0626 20:12:23.660837   30564 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0626 20:12:23.660846   30564 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0626 20:12:23.660852   30564 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0626 20:12:23.660856   30564 command_runner.go:130] > # 
	I0626 20:12:23.660863   30564 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0626 20:12:23.660868   30564 command_runner.go:130] > #
	I0626 20:12:23.660874   30564 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0626 20:12:23.660882   30564 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0626 20:12:23.660889   30564 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0626 20:12:23.660898   30564 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0626 20:12:23.660906   30564 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0626 20:12:23.660910   30564 command_runner.go:130] > [crio.image]
	I0626 20:12:23.660918   30564 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0626 20:12:23.660922   30564 command_runner.go:130] > # default_transport = "docker://"
	I0626 20:12:23.660929   30564 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0626 20:12:23.660937   30564 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0626 20:12:23.660941   30564 command_runner.go:130] > # global_auth_file = ""
	I0626 20:12:23.660950   30564 command_runner.go:130] > # The image used to instantiate infra containers.
	I0626 20:12:23.660955   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:12:23.660962   30564 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0626 20:12:23.660973   30564 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0626 20:12:23.660981   30564 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0626 20:12:23.660986   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:12:23.660991   30564 command_runner.go:130] > # pause_image_auth_file = ""
	I0626 20:12:23.660997   30564 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0626 20:12:23.661004   30564 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0626 20:12:23.661011   30564 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0626 20:12:23.661018   30564 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0626 20:12:23.661023   30564 command_runner.go:130] > # pause_command = "/pause"
	I0626 20:12:23.661029   30564 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0626 20:12:23.661037   30564 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0626 20:12:23.661044   30564 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0626 20:12:23.661052   30564 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0626 20:12:23.661057   30564 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0626 20:12:23.661064   30564 command_runner.go:130] > # signature_policy = ""
	I0626 20:12:23.661069   30564 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0626 20:12:23.661077   30564 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0626 20:12:23.661081   30564 command_runner.go:130] > # changing them here.
	I0626 20:12:23.661088   30564 command_runner.go:130] > # insecure_registries = [
	I0626 20:12:23.661092   30564 command_runner.go:130] > # ]
	I0626 20:12:23.661121   30564 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0626 20:12:23.661129   30564 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0626 20:12:23.661133   30564 command_runner.go:130] > # image_volumes = "mkdir"
	I0626 20:12:23.661138   30564 command_runner.go:130] > # Temporary directory to use for storing big files
	I0626 20:12:23.661143   30564 command_runner.go:130] > # big_files_temporary_dir = ""
	I0626 20:12:23.661149   30564 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0626 20:12:23.661156   30564 command_runner.go:130] > # CNI plugins.
	I0626 20:12:23.661159   30564 command_runner.go:130] > [crio.network]
	I0626 20:12:23.661168   30564 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0626 20:12:23.661174   30564 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0626 20:12:23.661180   30564 command_runner.go:130] > # cni_default_network = ""
	I0626 20:12:23.661185   30564 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0626 20:12:23.661193   30564 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0626 20:12:23.661198   30564 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0626 20:12:23.661204   30564 command_runner.go:130] > # plugin_dirs = [
	I0626 20:12:23.661208   30564 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0626 20:12:23.661214   30564 command_runner.go:130] > # ]
	I0626 20:12:23.661219   30564 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0626 20:12:23.661225   30564 command_runner.go:130] > [crio.metrics]
	I0626 20:12:23.661230   30564 command_runner.go:130] > # Globally enable or disable metrics support.
	I0626 20:12:23.661235   30564 command_runner.go:130] > enable_metrics = true
	I0626 20:12:23.661242   30564 command_runner.go:130] > # Specify enabled metrics collectors.
	I0626 20:12:23.661247   30564 command_runner.go:130] > # By default, all metrics are enabled.
	I0626 20:12:23.661255   30564 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0626 20:12:23.661261   30564 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0626 20:12:23.661267   30564 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0626 20:12:23.661271   30564 command_runner.go:130] > # metrics_collectors = [
	I0626 20:12:23.661274   30564 command_runner.go:130] > # 	"operations",
	I0626 20:12:23.661279   30564 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0626 20:12:23.661283   30564 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0626 20:12:23.661289   30564 command_runner.go:130] > # 	"operations_errors",
	I0626 20:12:23.661293   30564 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0626 20:12:23.661299   30564 command_runner.go:130] > # 	"image_pulls_by_name",
	I0626 20:12:23.661304   30564 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0626 20:12:23.661308   30564 command_runner.go:130] > # 	"image_pulls_failures",
	I0626 20:12:23.661312   30564 command_runner.go:130] > # 	"image_pulls_successes",
	I0626 20:12:23.661318   30564 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0626 20:12:23.661322   30564 command_runner.go:130] > # 	"image_layer_reuse",
	I0626 20:12:23.661326   30564 command_runner.go:130] > # 	"containers_oom_total",
	I0626 20:12:23.661329   30564 command_runner.go:130] > # 	"containers_oom",
	I0626 20:12:23.661333   30564 command_runner.go:130] > # 	"processes_defunct",
	I0626 20:12:23.661339   30564 command_runner.go:130] > # 	"operations_total",
	I0626 20:12:23.661346   30564 command_runner.go:130] > # 	"operations_latency_seconds",
	I0626 20:12:23.661352   30564 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0626 20:12:23.661359   30564 command_runner.go:130] > # 	"operations_errors_total",
	I0626 20:12:23.661365   30564 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0626 20:12:23.661386   30564 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0626 20:12:23.661395   30564 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0626 20:12:23.661402   30564 command_runner.go:130] > # 	"image_pulls_success_total",
	I0626 20:12:23.661412   30564 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0626 20:12:23.661419   30564 command_runner.go:130] > # 	"containers_oom_count_total",
	I0626 20:12:23.661427   30564 command_runner.go:130] > # ]
	I0626 20:12:23.661435   30564 command_runner.go:130] > # The port on which the metrics server will listen.
	I0626 20:12:23.661444   30564 command_runner.go:130] > # metrics_port = 9090
	I0626 20:12:23.661469   30564 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0626 20:12:23.661481   30564 command_runner.go:130] > # metrics_socket = ""
	I0626 20:12:23.661493   30564 command_runner.go:130] > # The certificate for the secure metrics server.
	I0626 20:12:23.661503   30564 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0626 20:12:23.661510   30564 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0626 20:12:23.661517   30564 command_runner.go:130] > # certificate on any modification event.
	I0626 20:12:23.661521   30564 command_runner.go:130] > # metrics_cert = ""
	I0626 20:12:23.661527   30564 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0626 20:12:23.661532   30564 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0626 20:12:23.661542   30564 command_runner.go:130] > # metrics_key = ""
	I0626 20:12:23.661551   30564 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0626 20:12:23.661560   30564 command_runner.go:130] > [crio.tracing]
	I0626 20:12:23.661568   30564 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0626 20:12:23.661578   30564 command_runner.go:130] > # enable_tracing = false
	I0626 20:12:23.661589   30564 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0626 20:12:23.661598   30564 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0626 20:12:23.661610   30564 command_runner.go:130] > # Number of samples to collect per million spans.
	I0626 20:12:23.661618   30564 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0626 20:12:23.661627   30564 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0626 20:12:23.661634   30564 command_runner.go:130] > [crio.stats]
	I0626 20:12:23.661639   30564 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0626 20:12:23.661647   30564 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0626 20:12:23.661653   30564 command_runner.go:130] > # stats_collection_period = 0
	I0626 20:12:23.662078   30564 command_runner.go:130] ! time="2023-06-26 20:12:23.645072188Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0626 20:12:23.662104   30564 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
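	The dump above is the effective CRI-O configuration on the joining node. For reference, a minimal way to spot-check the same values by hand (a sketch, not part of the log; it assumes the multinode-050558 profile and m02 node from this run, the default metrics_port of 9090, and that this CRI-O build provides the crio config subcommand):
	# print the configuration CRI-O resolves and pick out two of the values logged above
	minikube ssh -p multinode-050558 -n m02 -- sudo crio config | grep -E 'pause_image|pinns_path'
	# enable_metrics = true above, so the Prometheus endpoint should answer on the node
	minikube ssh -p multinode-050558 -n m02 -- curl -s http://127.0.0.1:9090/metrics | head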
	I0626 20:12:23.662229   30564 cni.go:84] Creating CNI manager for ""
	I0626 20:12:23.662245   30564 cni.go:137] 3 nodes found, recommending kindnet
	I0626 20:12:23.662256   30564 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:12:23.662279   30564 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.133 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-050558 NodeName:multinode-050558-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:12:23.662436   30564 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-050558-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
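	The rendered config above is what kubeadm consumes on the worker at join time; the authoritative cluster-wide copy lives in the kubeadm-config ConfigMap, which the join output further below also points at. To compare the two (a sketch, assuming kubectl is pointed at this cluster's context):
	kubectl -n kube-system get cm kubeadm-config -o yaml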
	
	I0626 20:12:23.662503   30564 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-050558-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
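	The unit fragment above is written to a systemd drop-in rather than to the main kubelet unit, as the scp lines below show. To inspect the merged result on the node (a sketch, assuming the profile and node names used in this run):
	minikube ssh -p multinode-050558 -n m02 -- sudo systemctl cat kubelet
	minikube ssh -p multinode-050558 -n m02 -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf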
	I0626 20:12:23.662562   30564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:12:23.672739   30564 command_runner.go:130] > kubeadm
	I0626 20:12:23.672757   30564 command_runner.go:130] > kubectl
	I0626 20:12:23.672765   30564 command_runner.go:130] > kubelet
	I0626 20:12:23.672793   30564 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:12:23.672840   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0626 20:12:23.681946   30564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0626 20:12:23.697323   30564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:12:23.713650   30564 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0626 20:12:23.717647   30564 command_runner.go:130] > 192.168.39.229	control-plane.minikube.internal
	I0626 20:12:23.717702   30564 host.go:66] Checking if "multinode-050558" exists ...
	I0626 20:12:23.717971   30564 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:12:23.718004   30564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:12:23.718027   30564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:12:23.732548   30564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39233
	I0626 20:12:23.732897   30564 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:12:23.733323   30564 main.go:141] libmachine: Using API Version  1
	I0626 20:12:23.733343   30564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:12:23.733665   30564 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:12:23.733821   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:12:23.733938   30564 start.go:301] JoinCluster: &{Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:12:23.734040   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0626 20:12:23.734056   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:12:23.736514   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:12:23.736873   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:12:23.736920   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:12:23.737009   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:12:23.737162   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:12:23.737318   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:12:23.737471   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:12:23.923696   30564 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 9uuavw.hhdwd5awfotg84os --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
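	The discovery-token-ca-cert-hash in the generated join command is a SHA-256 over the cluster CA's public key, so it can be recomputed independently on the control-plane node using the standard openssl recipe from the kubeadm documentation (a sketch, shown only for cross-checking the value above):
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'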
	I0626 20:12:23.927595   30564 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0626 20:12:23.927631   30564 host.go:66] Checking if "multinode-050558" exists ...
	I0626 20:12:23.927961   30564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:12:23.927998   30564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:12:23.942621   30564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
	I0626 20:12:23.943130   30564 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:12:23.943669   30564 main.go:141] libmachine: Using API Version  1
	I0626 20:12:23.943686   30564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:12:23.943984   30564 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:12:23.944196   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:12:23.944406   30564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-050558-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0626 20:12:23.944436   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:12:23.947466   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:12:23.947931   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:12:23.947965   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:12:23.948119   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:12:23.948288   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:12:23.948453   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:12:23.948619   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:12:24.158987   30564 command_runner.go:130] > node/multinode-050558-m02 cordoned
	I0626 20:12:27.206761   30564 command_runner.go:130] > pod "busybox-67b7f59bb-z697w" has DeletionTimestamp older than 1 seconds, skipping
	I0626 20:12:27.206784   30564 command_runner.go:130] > node/multinode-050558-m02 drained
	I0626 20:12:27.209249   30564 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0626 20:12:27.209274   30564 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-kmcqm, kube-system/kube-proxy-wwg6x
	I0626 20:12:27.209468   30564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-050558-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.265028156s)
	I0626 20:12:27.209493   30564 node.go:108] successfully drained node "m02"
	I0626 20:12:27.209824   30564 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:12:27.210051   30564 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:12:27.210382   30564 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0626 20:12:27.210440   30564 round_trippers.go:463] DELETE https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:12:27.210451   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:27.210463   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:27.210474   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:27.210488   30564 round_trippers.go:473]     Content-Type: application/json
	I0626 20:12:27.224009   30564 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0626 20:12:27.224031   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:27.224040   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:27.224052   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:27.224060   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:27.224068   30564 round_trippers.go:580]     Content-Length: 171
	I0626 20:12:27.224079   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:27 GMT
	I0626 20:12:27.224088   30564 round_trippers.go:580]     Audit-Id: 62a0e30a-9ae9-40e4-b586-426ab555317f
	I0626 20:12:27.224098   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:27.224171   30564 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-050558-m02","kind":"nodes","uid":"534758b3-a740-496f-bd4f-646fcfbf55f8"}}
	I0626 20:12:27.224232   30564 node.go:124] successfully deleted node "m02"
	I0626 20:12:27.224245   30564 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
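	minikube's remove-then-rejoin step is equivalent to a manual drain followed by deleting the Node object; --disable-eviction deletes pods directly instead of going through the eviction API, and --delete-local-data is retained only for backward compatibility, hence the deprecation warning above. Roughly, against the same cluster (a sketch):
	kubectl drain multinode-050558-m02 --ignore-daemonsets --delete-emptydir-data --force --grace-period=1
	kubectl delete node multinode-050558-m02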
	I0626 20:12:27.224270   30564 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0626 20:12:27.224288   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9uuavw.hhdwd5awfotg84os --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-050558-m02"
	I0626 20:12:27.271543   30564 command_runner.go:130] ! W0626 20:12:27.262182    2567 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0626 20:12:27.271576   30564 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0626 20:12:27.399760   30564 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0626 20:12:27.399796   30564 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0626 20:12:28.221154   30564 command_runner.go:130] > [preflight] Running pre-flight checks
	I0626 20:12:28.221180   30564 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0626 20:12:28.221193   30564 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0626 20:12:28.221205   30564 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:12:28.221216   30564 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:12:28.221223   30564 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0626 20:12:28.221233   30564 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0626 20:12:28.221241   30564 command_runner.go:130] > This node has joined the cluster:
	I0626 20:12:28.221249   30564 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0626 20:12:28.221258   30564 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0626 20:12:28.221273   30564 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0626 20:12:28.221293   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0626 20:12:28.484551   30564 start.go:303] JoinCluster complete in 4.750604676s
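	After a successful join, the new kubelet's client certificate has been issued through the TLS bootstrap CSR flow, and the node should be listed from the control plane (a sketch for verifying by hand):
	kubectl get nodes -o wide
	# the bootstrap CSR approved during the TLS handshake appears here
	kubectl get csr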
	I0626 20:12:28.484579   30564 cni.go:84] Creating CNI manager for ""
	I0626 20:12:28.484587   30564 cni.go:137] 3 nodes found, recommending kindnet
	I0626 20:12:28.484650   30564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0626 20:12:28.490655   30564 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0626 20:12:28.490677   30564 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0626 20:12:28.490685   30564 command_runner.go:130] > Device: 11h/17d	Inode: 3543        Links: 1
	I0626 20:12:28.490694   30564 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 20:12:28.490701   30564 command_runner.go:130] > Access: 2023-06-26 20:10:02.269403478 +0000
	I0626 20:12:28.490708   30564 command_runner.go:130] > Modify: 2023-06-22 22:21:30.000000000 +0000
	I0626 20:12:28.490715   30564 command_runner.go:130] > Change: 2023-06-26 20:10:00.284403478 +0000
	I0626 20:12:28.490723   30564 command_runner.go:130] >  Birth: -
	I0626 20:12:28.490768   30564 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0626 20:12:28.490780   30564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0626 20:12:28.508115   30564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0626 20:12:29.045806   30564 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0626 20:12:29.054722   30564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0626 20:12:29.058854   30564 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0626 20:12:29.076849   30564 command_runner.go:130] > daemonset.apps/kindnet configured
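	kindnet is applied as a DaemonSet, so the re-apply above is a no-op everywhere except the rejoined node. To watch it converge (a sketch; the app=kindnet label is the one used by the upstream kindnet manifest and is an assumption here):
	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide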
	I0626 20:12:29.080705   30564 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:12:29.080971   30564 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:12:29.081253   30564 round_trippers.go:463] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 20:12:29.081264   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.081275   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.081288   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.084327   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:12:29.084353   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.084360   30564 round_trippers.go:580]     Audit-Id: 73ed8e65-6ae3-46a5-b3a3-2e497294fdf6
	I0626 20:12:29.084366   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.084371   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.084382   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.084394   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.084408   30564 round_trippers.go:580]     Content-Length: 291
	I0626 20:12:29.084420   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.084444   30564 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94c202ca-4f15-4fc0-a8d2-e6d62293ec32","resourceVersion":"861","creationTimestamp":"2023-06-26T20:00:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0626 20:12:29.084524   30564 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-050558" context rescaled to 1 replicas
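	The rescale above goes through the Scale subresource of the coredns Deployment. The same operation and the same object are reachable with kubectl (a sketch for reproducing it by hand):
	kubectl -n kube-system scale deployment coredns --replicas=1
	kubectl get --raw /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale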
	I0626 20:12:29.084548   30564 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0626 20:12:29.087315   30564 out.go:177] * Verifying Kubernetes components...
	I0626 20:12:29.088801   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:12:29.129299   30564 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:12:29.129559   30564 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:12:29.129792   30564 node_ready.go:35] waiting up to 6m0s for node "multinode-050558-m02" to be "Ready" ...
	I0626 20:12:29.129848   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:12:29.129857   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.129864   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.129870   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.133828   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:12:29.133852   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.133861   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.133869   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.133877   30564 round_trippers.go:580]     Audit-Id: 07213a39-19ff-47c6-b5a4-ae58b2ebcc94
	I0626 20:12:29.133885   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.133894   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.133906   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.134370   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"3b6f3e73-9c2f-495b-9525-5a38ba85fc78","resourceVersion":"1000","creationTimestamp":"2023-06-26T20:12:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vol [truncated 3442 chars]
	I0626 20:12:29.134715   30564 node_ready.go:49] node "multinode-050558-m02" has status "Ready":"True"
	I0626 20:12:29.134730   30564 node_ready.go:38] duration metric: took 4.925281ms waiting for node "multinode-050558-m02" to be "Ready" ...
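	The readiness poll above is a plain GET on the Node object repeated until the Ready condition is True; kubectl can express the same wait directly (a sketch, matching the 6m budget used in this run):
	kubectl wait --for=condition=Ready node/multinode-050558-m02 --timeout=6m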
	I0626 20:12:29.134739   30564 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:12:29.134790   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:12:29.134798   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.134805   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.134812   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.139528   30564 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 20:12:29.139549   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.139564   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.139574   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.139583   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.139592   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.139604   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.139613   30564 round_trippers.go:580]     Audit-Id: 36bac806-07b1-4f6b-9d0b-20797332c31e
	I0626 20:12:29.140711   30564 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1012"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"838","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82249 chars]
	I0626 20:12:29.143779   30564 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.143877   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:12:29.143889   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.143899   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.143908   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.146639   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:29.146658   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.146668   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.146677   30564 round_trippers.go:580]     Audit-Id: 75d3cf9d-3e91-4c93-b1d4-91f1952b5f23
	I0626 20:12:29.146686   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.146695   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.146704   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.146710   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.146891   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"838","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0626 20:12:29.147385   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:12:29.147401   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.147408   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.147414   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.149995   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:29.150010   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.150016   30564 round_trippers.go:580]     Audit-Id: c37405ca-8a7b-483a-a70c-bdb9cd18ad8a
	I0626 20:12:29.150022   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.150030   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.150039   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.150049   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.150060   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.150306   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:12:29.150589   30564 pod_ready.go:92] pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace has status "Ready":"True"
	I0626 20:12:29.150603   30564 pod_ready.go:81] duration metric: took 6.798626ms waiting for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
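The pod_ready loop above repeats one pattern per control-plane pod: GET the pod, look for its Ready condition, then GET the owning node. A minimal sketch of that readiness check with client-go follows; the kubeconfig path, namespace, and pod name are taken from the log, but the wiring is an illustration, not minikube's actual helper.

    // podIsReady reports whether the named pod has condition Ready=True,
    // the same check pod_ready.go logs above. Error handling abbreviated.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := podIsReady(context.Background(), cs, "kube-system", "coredns-5d78c9869d-5wffn")
        fmt.Println(ready, err)
    }

The waits for etcd, kube-apiserver, and the remaining system pods below amount to the same call with different names.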
	I0626 20:12:29.150610   30564 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.150696   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-050558
	I0626 20:12:29.150708   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.150719   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.150730   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.153560   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:29.153579   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.153589   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.153598   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.153606   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.153615   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.153627   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.153636   30564 round_trippers.go:580]     Audit-Id: b1e2cc6b-8301-43bb-9993-b718fdb0dca6
	I0626 20:12:29.153807   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-050558","namespace":"kube-system","uid":"457d2420-8ece-4b92-8281-7866fa6a884a","resourceVersion":"832","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.mirror":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.seen":"2023-06-26T19:59:55.756268397Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0626 20:12:29.154193   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:12:29.154209   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.154216   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.154223   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.156613   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:29.156630   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.156639   30564 round_trippers.go:580]     Audit-Id: 454a0d0f-cb0a-4d8d-a00d-76676b885f8b
	I0626 20:12:29.156647   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.156655   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.156664   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.156672   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.156685   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.157038   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:12:29.157307   30564 pod_ready.go:92] pod "etcd-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:12:29.157319   30564 pod_ready.go:81] duration metric: took 6.704237ms waiting for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.157332   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.157369   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:12:29.157386   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.157397   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.157410   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.160093   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:29.160111   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.160121   30564 round_trippers.go:580]     Audit-Id: 5a2eff82-b7ef-49c5-8ac3-4f1c595c4302
	I0626 20:12:29.160129   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.160137   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.160151   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.160160   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.160168   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.160360   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"864","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0626 20:12:29.160839   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:12:29.160858   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.160869   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.160879   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.164060   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:12:29.164078   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.164086   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.164094   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.164106   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.164116   30564 round_trippers.go:580]     Audit-Id: 70c4c222-5647-4938-b341-b5dc626110b7
	I0626 20:12:29.164126   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.164139   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.164278   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:12:29.164545   30564 pod_ready.go:92] pod "kube-apiserver-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:12:29.164557   30564 pod_ready.go:81] duration metric: took 7.220341ms waiting for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.164564   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.164602   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-050558
	I0626 20:12:29.164609   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.164616   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.164622   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.167108   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:29.167127   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.167137   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.167150   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.167158   30564 round_trippers.go:580]     Audit-Id: 55015938-cbfb-485c-80e3-1d69e9ab721d
	I0626 20:12:29.167167   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.167180   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.167193   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.167466   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-050558","namespace":"kube-system","uid":"d90eb1a6-03bd-4bdf-b50d-9448cef0b578","resourceVersion":"831","creationTimestamp":"2023-06-26T20:00:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.mirror":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.seen":"2023-06-26T20:00:04.802665770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0626 20:12:29.167908   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:12:29.167924   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.167935   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.167947   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.175396   30564 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0626 20:12:29.175413   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.175421   30564 round_trippers.go:580]     Audit-Id: b6961d75-da47-44e6-9481-f320a28bfad7
	I0626 20:12:29.175430   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.175438   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.175450   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.175462   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.175475   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.175650   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:12:29.176051   30564 pod_ready.go:92] pod "kube-controller-manager-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:12:29.176084   30564 pod_ready.go:81] duration metric: took 11.513179ms waiting for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.176097   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-57pwt" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.330495   30564 request.go:628] Waited for 154.321126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-57pwt
	I0626 20:12:29.330556   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-57pwt
	I0626 20:12:29.330562   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.330573   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.330588   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.333526   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:29.333554   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.333564   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.333573   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.333581   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.333589   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.333598   30564 round_trippers.go:580]     Audit-Id: 86c7ef2c-55c9-4f6b-a6d5-e625e8cb6b94
	I0626 20:12:29.333610   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.334247   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-57pwt","generateName":"kube-proxy-","namespace":"kube-system","uid":"4611d3e6-962b-437a-8b38-387719e69da6","resourceVersion":"685","creationTimestamp":"2023-06-26T20:01:54Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:01:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0626 20:12:29.529970   30564 request.go:628] Waited for 195.301427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:12:29.530018   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:12:29.530022   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.530030   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.530036   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.532776   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:29.532794   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.532802   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.532807   30564 round_trippers.go:580]     Audit-Id: 8e1a6dc7-2c78-4b62-bc27-aa810e808246
	I0626 20:12:29.532813   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.532818   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.532823   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.532830   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.533058   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m03","uid":"0d94d9a3-b2d7-4a89-99ad-2d23c494ddb0","resourceVersion":"850","creationTimestamp":"2023-06-26T20:02:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0626 20:12:29.533328   30564 pod_ready.go:92] pod "kube-proxy-57pwt" in "kube-system" namespace has status "Ready":"True"
	I0626 20:12:29.533341   30564 pod_ready.go:81] duration metric: took 357.232849ms waiting for pod "kube-proxy-57pwt" in "kube-system" namespace to be "Ready" ...
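The "Waited for ... due to client-side throttling, not priority and fairness" lines here are emitted by client-go itself, not the API server: a rest.Config left at the defaults (QPS 5, Burst 10) feeds a token bucket, and once this burst of back-to-back GETs drains it, each further request sleeps until a token frees up. A sketch of raising those limits on the client side; the values are illustrative, not a recommendation.

    // Raising client-go's default client-side limits (QPS 5, Burst 10),
    // which produce the throttling waits logged above.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // sustained requests per second
        cfg.Burst = 100 // bucket size for short bursts
        _ = kubernetes.NewForConfigOrDie(cfg)
    }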
	I0626 20:12:29.533349   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.730874   30564 request.go:628] Waited for 197.450148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:12:29.730930   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:12:29.730935   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.730953   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.730963   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.734510   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:12:29.734533   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.734543   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.734550   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.734558   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.734565   30564 round_trippers.go:580]     Audit-Id: 0b647457-9579-44e9-a4a0-e39105533f1e
	I0626 20:12:29.734574   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.734587   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.735300   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-67x99","generateName":"kube-proxy-","namespace":"kube-system","uid":"7ffa817a-1b4a-41a1-9a56-5c65849dc57e","resourceVersion":"744","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0626 20:12:29.930007   30564 request.go:628] Waited for 194.264847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:12:29.930053   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:12:29.930058   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:29.930066   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:29.930073   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:29.933366   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:12:29.933405   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:29.933417   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:29 GMT
	I0626 20:12:29.933427   30564 round_trippers.go:580]     Audit-Id: 2a6799cb-3ee4-4d83-ac9a-8a5576d6501f
	I0626 20:12:29.933436   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:29.933446   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:29.933455   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:29.933464   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:29.933593   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:12:29.933924   30564 pod_ready.go:92] pod "kube-proxy-67x99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:12:29.933939   30564 pod_ready.go:81] duration metric: took 400.584009ms waiting for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:29.933947   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:30.130430   30564 request.go:628] Waited for 196.414433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:12:30.130500   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:12:30.130506   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:30.130513   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:30.130521   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:30.133466   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:30.133485   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:30.133492   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:30.133498   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:30.133503   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:30 GMT
	I0626 20:12:30.133508   30564 round_trippers.go:580]     Audit-Id: d458ca4b-e4c4-4501-bad2-758520ebf527
	I0626 20:12:30.133513   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:30.133518   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:30.133790   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wwg6x","generateName":"kube-proxy-","namespace":"kube-system","uid":"bdb04dda-dd36-45be-8f0e-7dad2bce1ef0","resourceVersion":"1018","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0626 20:12:30.330465   30564 request.go:628] Waited for 196.251135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:12:30.330519   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:12:30.330524   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:30.330531   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:30.330537   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:30.333335   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:30.333358   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:30.333368   30564 round_trippers.go:580]     Audit-Id: 96c40309-e972-455d-8fa9-dfa89a38a73e
	I0626 20:12:30.333395   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:30.333404   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:30.333413   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:30.333423   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:30.333433   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:30 GMT
	I0626 20:12:30.333541   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"3b6f3e73-9c2f-495b-9525-5a38ba85fc78","resourceVersion":"1000","creationTimestamp":"2023-06-26T20:12:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:12:27Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vol [truncated 3442 chars]
	I0626 20:12:30.333828   30564 pod_ready.go:92] pod "kube-proxy-wwg6x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:12:30.333848   30564 pod_ready.go:81] duration metric: took 399.894158ms waiting for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:30.333860   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:30.530292   30564 request.go:628] Waited for 196.366986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:12:30.530355   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:12:30.530360   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:30.530370   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:30.530379   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:30.533792   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:12:30.533819   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:30.533829   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:30 GMT
	I0626 20:12:30.533837   30564 round_trippers.go:580]     Audit-Id: 1fddb760-4a05-461f-a5f7-32806fbff8c6
	I0626 20:12:30.533846   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:30.533854   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:30.533863   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:30.533872   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:30.534474   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-050558","namespace":"kube-system","uid":"1645e687-25f4-49b9-9d11-5f3db01fe7d2","resourceVersion":"848","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.mirror":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.seen":"2023-06-26T19:59:55.756274617Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0626 20:12:30.730093   30564 request.go:628] Waited for 195.279606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:12:30.730154   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:12:30.730159   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:30.730166   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:30.730172   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:30.732754   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:30.732772   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:30.732778   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:30.732784   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:30.732789   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:30.732795   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:30 GMT
	I0626 20:12:30.732799   30564 round_trippers.go:580]     Audit-Id: c1da61c3-71c1-46cc-81f9-456b23f23370
	I0626 20:12:30.732805   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:30.733111   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:12:30.733494   30564 pod_ready.go:92] pod "kube-scheduler-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:12:30.733509   30564 pod_ready.go:81] duration metric: took 399.64195ms waiting for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:12:30.733520   30564 pod_ready.go:38] duration metric: took 1.598772012s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:12:30.733531   30564 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:12:30.733570   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:12:30.747844   30564 system_svc.go:56] duration metric: took 14.301651ms WaitForService to wait for kubelet.
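WaitForService above needs only the exit status: systemctl is-active --quiet prints nothing and exits 0 exactly when the unit is active, so there is no output to parse. A local, non-SSH sketch of the same check, assuming a systemd host:

    // serviceActive reports whether a systemd unit is active, using the
    // exit code of `systemctl is-active --quiet`, as in the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func serviceActive(unit string) bool {
        // Run returns nil only on exit status 0, i.e. the unit is active.
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", serviceActive("kubelet"))
    }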
	I0626 20:12:30.747873   30564 kubeadm.go:581] duration metric: took 1.663304428s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:12:30.747890   30564 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:12:30.930300   30564 request.go:628] Waited for 182.346439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes
	I0626 20:12:30.930353   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes
	I0626 20:12:30.930359   30564 round_trippers.go:469] Request Headers:
	I0626 20:12:30.930377   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:12:30.930386   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:12:30.933395   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:12:30.933415   30564 round_trippers.go:577] Response Headers:
	I0626 20:12:30.933423   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:12:30.933431   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:12:30 GMT
	I0626 20:12:30.933439   30564 round_trippers.go:580]     Audit-Id: 60ebc1c0-f316-4930-bef5-aa096efd0d78
	I0626 20:12:30.933451   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:12:30.933463   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:12:30.933480   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:12:30.934239   30564 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1021"},"items":[{"metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15106 chars]
	I0626 20:12:30.934764   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:12:30.934780   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:12:30.934789   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:12:30.934801   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:12:30.934805   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:12:30.934808   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:12:30.934815   30564 node_conditions.go:105] duration metric: took 186.921455ms to run NodePressure ...
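The NodePressure pass reads all three nodes from the single GET /api/v1/nodes response and logs each node's ephemeral-storage and CPU capacity. A sketch of the same pass with client-go, also flagging any pressure condition that reports True; imports and clientset wiring as in the pod-readiness sketch earlier.

    // checkNodes mirrors the node_conditions output above: one List
    // call, then capacity and pressure conditions per node.
    func checkNodes(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
            for _, c := range n.Status.Conditions {
                // MemoryPressure/DiskPressure/PIDPressure should be False
                // on a healthy node; only NodeReady should be True.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  condition %s is True\n", c.Type)
                }
            }
        }
        return nil
    }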
	I0626 20:12:30.934824   30564 start.go:228] waiting for startup goroutines ...
	I0626 20:12:30.934847   30564 start.go:242] writing updated cluster config ...
	I0626 20:12:30.935247   30564 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:12:30.935366   30564 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 20:12:30.938721   30564 out.go:177] * Starting worker node multinode-050558-m03 in cluster multinode-050558
	I0626 20:12:30.940128   30564 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:12:30.940147   30564 cache.go:57] Caching tarball of preloaded images
	I0626 20:12:30.940239   30564 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 20:12:30.940251   30564 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
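The preload step is a plain cache check: if the expected tarball already exists under .minikube/cache, the download is skipped. A generic stat-or-download sketch of that pattern; the URL and path here are placeholders, not minikube's real preload source.

    // ensureCached downloads url to path only if path does not already
    // exist, mirroring "Found ... in cache, skipping download" above.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func ensureCached(path, url string) error {
        if _, err := os.Stat(path); err == nil {
            return nil // already cached, skip the download
        }
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        tmp := path + ".partial"
        f, err := os.Create(tmp)
        if err != nil {
            return err
        }
        if _, err := io.Copy(f, resp.Body); err != nil {
            f.Close()
            return err
        }
        if err := f.Close(); err != nil {
            return err
        }
        // Rename only after a complete write, so a torn download is never cached.
        return os.Rename(tmp, path)
    }

    func main() {
        fmt.Println(ensureCached("/tmp/preload.tar.lz4", "https://example.com/preload.tar.lz4"))
    }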
	I0626 20:12:30.940361   30564 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/config.json ...
	I0626 20:12:30.940556   30564 start.go:365] acquiring machines lock for multinode-050558-m03: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:12:30.940602   30564 start.go:369] acquired machines lock for "multinode-050558-m03" in 27.237µs
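Acquiring the machines lock serializes access to one machine's state and the log reports how long acquisition took. minikube's real lock also works across processes; the sketch below only shows the in-process acquire/measure/release shape, with the lock name taken from the log.

    // Named in-process locks, sketching the "acquiring machines lock"
    // step above. Not cross-process, unlike minikube's actual lock.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    var locks sync.Map // lock name -> *sync.Mutex

    func acquire(name string) func() {
        start := time.Now()
        m, _ := locks.LoadOrStore(name, &sync.Mutex{})
        mu := m.(*sync.Mutex)
        mu.Lock()
        fmt.Printf("acquired machines lock for %q in %s\n", name, time.Since(start))
        return mu.Unlock
    }

    func main() {
        release := acquire("multinode-050558-m03")
        defer release()
        // ... mutate machine state while holding the lock ...
    }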
	I0626 20:12:30.940621   30564 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:12:30.940630   30564 fix.go:54] fixHost starting: m03
	I0626 20:12:30.940899   30564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:12:30.940920   30564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:12:30.955514   30564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0626 20:12:30.955909   30564 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:12:30.956416   30564 main.go:141] libmachine: Using API Version  1
	I0626 20:12:30.956434   30564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:12:30.956746   30564 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:12:30.956945   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .DriverName
	I0626 20:12:30.957087   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetState
	I0626 20:12:30.958522   30564 fix.go:102] recreateIfNeeded on multinode-050558-m03: state=Running err=<nil>
	W0626 20:12:30.958538   30564 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:12:30.960714   30564 out.go:177] * Updating the running kvm2 "multinode-050558-m03" VM ...
	I0626 20:12:30.962174   30564 machine.go:88] provisioning docker machine ...
	I0626 20:12:30.962196   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .DriverName
	I0626 20:12:30.962423   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetMachineName
	I0626 20:12:30.962594   30564 buildroot.go:166] provisioning hostname "multinode-050558-m03"
	I0626 20:12:30.962611   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetMachineName
	I0626 20:12:30.962789   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHHostname
	I0626 20:12:30.965064   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:30.965501   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:12:30.965534   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:30.965681   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHPort
	I0626 20:12:30.965872   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:12:30.966030   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:12:30.966163   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHUsername
	I0626 20:12:30.966322   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:12:30.966706   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0626 20:12:30.966722   30564 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-050558-m03 && echo "multinode-050558-m03" | sudo tee /etc/hostname
	I0626 20:12:31.103654   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-050558-m03
	
	I0626 20:12:31.103678   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHHostname
	I0626 20:12:31.106263   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.106677   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:12:31.106709   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.106862   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHPort
	I0626 20:12:31.107072   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:12:31.107218   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:12:31.107379   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHUsername
	I0626 20:12:31.107544   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:12:31.108117   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0626 20:12:31.108144   30564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-050558-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-050558-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-050558-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:12:31.222197   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
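Both provisioning fragments above (setting the hostname, then patching /etc/hosts) run over a single SSH exec channel each, and the "SSH cmd err, output" lines record the combined result. A minimal sketch of that remote-exec step with golang.org/x/crypto/ssh; the address and user come from the log, the key path is a placeholder, and real code must verify host keys instead of using InsecureIgnoreHostKey.

    // runRemote executes one command over SSH and returns its combined
    // output, roughly what the "About to run SSH command" steps do.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Illustration only: production code must pin or verify host keys.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runRemote("192.168.39.231:22", "docker",
            os.ExpandEnv("$HOME/.ssh/id_rsa"),
            `sudo hostname multinode-050558-m03 && echo "multinode-050558-m03" | sudo tee /etc/hostname`)
        fmt.Println(out, err)
    }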
	I0626 20:12:31.222229   30564 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:12:31.222253   30564 buildroot.go:174] setting up certificates
	I0626 20:12:31.222261   30564 provision.go:83] configureAuth start
	I0626 20:12:31.222274   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetMachineName
	I0626 20:12:31.222547   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetIP
	I0626 20:12:31.225430   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.225870   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:12:31.225913   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.226042   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHHostname
	I0626 20:12:31.228377   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.228707   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:12:31.228733   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.228891   30564 provision.go:138] copyHostCerts
	I0626 20:12:31.228939   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:12:31.228980   30564 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:12:31.228992   30564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:12:31.229065   30564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:12:31.229139   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:12:31.229157   30564 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:12:31.229171   30564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:12:31.229216   30564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:12:31.229305   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:12:31.229333   30564 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:12:31.229340   30564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:12:31.229394   30564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:12:31.229466   30564 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.multinode-050558-m03 san=[192.168.39.231 192.168.39.231 localhost 127.0.0.1 minikube multinode-050558-m03]
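configureAuth regenerates a server certificate signed by the minikube CA, packing the VM's IP, localhost, and both hostnames into the SANs so one cert covers every way the daemon is addressed. A sketch of issuing such a cert with crypto/x509; caCert and caKey are assumed to be already loaded and parsed, and the serial number and validity window are illustrative.

    // issueServerCert signs a new server certificate with an existing CA,
    // putting IPs and hostnames into the SANs as the provision step does.
    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        org string, ips []net.IP, dnsNames []string) (certDER []byte, key *rsa.PrivateKey, err error) {

        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2), // illustrative; use a random serial in real code
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,      // e.g. 192.168.39.231, 127.0.0.1
            DNSNames:     dnsNames, // e.g. localhost, minikube, multinode-050558-m03
        }
        certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return certDER, key, err
    }

The copyRemoteCerts step that follows then pushes the resulting PEM files to /etc/docker on the VM over the same SSH channel.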
	I0626 20:12:31.324818   30564 provision.go:172] copyRemoteCerts
	I0626 20:12:31.324881   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:12:31.324909   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHHostname
	I0626 20:12:31.327751   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.328138   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:12:31.328182   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.328350   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHPort
	I0626 20:12:31.328545   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:12:31.328712   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHUsername
	I0626 20:12:31.328845   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m03/id_rsa Username:docker}
	I0626 20:12:31.415316   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0626 20:12:31.415402   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:12:31.439188   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0626 20:12:31.439267   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0626 20:12:31.463136   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0626 20:12:31.463217   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:12:31.485842   30564 provision.go:86] duration metric: configureAuth took 263.570846ms
	I0626 20:12:31.485865   30564 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:12:31.486056   30564 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:12:31.486122   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHHostname
	I0626 20:12:31.489226   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.489640   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:12:31.489667   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:12:31.489826   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHPort
	I0626 20:12:31.490033   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:12:31.490200   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:12:31.490368   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHUsername
	I0626 20:12:31.490566   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:12:31.490996   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0626 20:12:31.491018   30564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:14:02.192174   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:14:02.192196   30564 machine.go:91] provisioned docker machine in 1m31.230006743s
	I0626 20:14:02.192207   30564 start.go:300] post-start starting for "multinode-050558-m03" (driver="kvm2")
	I0626 20:14:02.192216   30564 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:14:02.192236   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .DriverName
	I0626 20:14:02.192564   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:14:02.192597   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHHostname
	I0626 20:14:02.195659   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.196062   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:14:02.196096   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.196281   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHPort
	I0626 20:14:02.196472   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:14:02.196658   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHUsername
	I0626 20:14:02.196790   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m03/id_rsa Username:docker}
	I0626 20:14:02.289783   30564 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:14:02.294118   30564 command_runner.go:130] > NAME=Buildroot
	I0626 20:14:02.294136   30564 command_runner.go:130] > VERSION=2021.02.12-1-ge2e95ab-dirty
	I0626 20:14:02.294145   30564 command_runner.go:130] > ID=buildroot
	I0626 20:14:02.294151   30564 command_runner.go:130] > VERSION_ID=2021.02.12
	I0626 20:14:02.294155   30564 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0626 20:14:02.294179   30564 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:14:02.294187   30564 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:14:02.294277   30564 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:14:02.294353   30564 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:14:02.294362   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /etc/ssl/certs/144432.pem
	I0626 20:14:02.294434   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:14:02.304706   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
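The filesync scan above walks .minikube/files and mirrors every file it finds to the same path on the guest, which is how files/etc/ssl/certs/144432.pem becomes /etc/ssl/certs/144432.pem. A small illustrative walk of that mapping, assuming the same root directory as the log (not minikube's actual filesync code):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func main() {
        // Root taken from the log; every regular file below it maps to "/"+relpath
        // on the guest.
        root := "/home/jenkins/minikube-integration/16761-7242/.minikube/files"
        err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, relErr := filepath.Rel(root, p)
            if relErr != nil {
                return relErr
            }
            fmt.Printf("local asset: %s -> /%s\n", p, filepath.ToSlash(rel))
            return nil
        })
        if err != nil {
            fmt.Println("scan failed:", err)
        }
    }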
	I0626 20:14:02.326586   30564 start.go:303] post-start completed in 134.365703ms
	I0626 20:14:02.326608   30564 fix.go:56] fixHost completed within 1m31.385978322s
	I0626 20:14:02.326627   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHHostname
	I0626 20:14:02.329648   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.330027   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:14:02.330055   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.330186   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHPort
	I0626 20:14:02.330382   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:14:02.330578   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:14:02.330710   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHUsername
	I0626 20:14:02.330890   30564 main.go:141] libmachine: Using SSH client type: native
	I0626 20:14:02.331321   30564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0626 20:14:02.331336   30564 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 20:14:02.446228   30564 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687810442.440244250
	
	I0626 20:14:02.446253   30564 fix.go:206] guest clock: 1687810442.440244250
	I0626 20:14:02.446260   30564 fix.go:219] Guest: 2023-06-26 20:14:02.44024425 +0000 UTC Remote: 2023-06-26 20:14:02.326611694 +0000 UTC m=+550.740686091 (delta=113.632556ms)
	I0626 20:14:02.446280   30564 fix.go:190] guest clock delta is within tolerance: 113.632556ms
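The fix.go lines above read the guest clock over SSH with date +%s.%N, compare it against the host clock, and accept the ~113ms skew. A minimal sketch of that comparison using the value captured in the log; the tolerance constant is an assumption, not taken from minikube:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1687810442.440244250" // guest `date +%s.%N` output from the log
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        host := time.Now()
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumption; not from the log
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }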
	I0626 20:14:02.446286   30564 start.go:83] releasing machines lock for "multinode-050558-m03", held for 1m31.505671992s
	I0626 20:14:02.446311   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .DriverName
	I0626 20:14:02.446598   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetIP
	I0626 20:14:02.449111   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.449519   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:14:02.449544   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.451618   30564 out.go:177] * Found network options:
	I0626 20:14:02.453158   30564 out.go:177]   - NO_PROXY=192.168.39.229,192.168.39.133
	W0626 20:14:02.454694   30564 proxy.go:119] fail to check proxy env: Error ip not in block
	W0626 20:14:02.454715   30564 proxy.go:119] fail to check proxy env: Error ip not in block
	I0626 20:14:02.454769   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .DriverName
	I0626 20:14:02.455312   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .DriverName
	I0626 20:14:02.455480   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .DriverName
	I0626 20:14:02.455580   30564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:14:02.455617   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHHostname
	W0626 20:14:02.455675   30564 proxy.go:119] fail to check proxy env: Error ip not in block
	W0626 20:14:02.455698   30564 proxy.go:119] fail to check proxy env: Error ip not in block
	I0626 20:14:02.455764   30564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:14:02.455786   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHHostname
	I0626 20:14:02.458496   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.458761   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.458922   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:14:02.458953   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.459104   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHPort
	I0626 20:14:02.459251   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:14:02.459274   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:02.459286   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:14:02.459411   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHPort
	I0626 20:14:02.459595   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHKeyPath
	I0626 20:14:02.459597   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHUsername
	I0626 20:14:02.459751   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetSSHUsername
	I0626 20:14:02.459745   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m03/id_rsa Username:docker}
	I0626 20:14:02.459904   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m03/id_rsa Username:docker}
	I0626 20:14:02.700197   30564 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0626 20:14:02.700261   30564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 20:14:02.707635   30564 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0626 20:14:02.708042   30564 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:14:02.708108   30564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:14:02.717138   30564 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0626 20:14:02.717156   30564 start.go:466] detecting cgroup driver to use...
	I0626 20:14:02.717206   30564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:14:02.732324   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:14:02.744990   30564 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:14:02.745037   30564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:14:02.760068   30564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:14:02.773195   30564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:14:02.915793   30564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:14:03.061005   30564 docker.go:212] disabling docker service ...
	I0626 20:14:03.061061   30564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:14:03.077012   30564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:14:03.089971   30564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:14:03.234962   30564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:14:03.373781   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
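The "disabling docker service" block is a fixed sequence of systemctl commands issued through ssh_runner. An illustrative local equivalent using os/exec (the real code runs these over SSH and tolerates individual command failures):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command sequence as logged above.
        cmds := [][]string{
            {"sudo", "systemctl", "stop", "-f", "docker.socket"},
            {"sudo", "systemctl", "stop", "-f", "docker.service"},
            {"sudo", "systemctl", "disable", "docker.socket"},
            {"sudo", "systemctl", "mask", "docker.service"},
        }
        for _, c := range cmds {
            if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
                fmt.Printf("%v failed: %v\n%s\n", c, err, out)
            }
        }
    }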
	I0626 20:14:03.387419   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:14:03.404902   30564 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0626 20:14:03.405271   30564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:14:03.405332   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:14:03.415784   30564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:14:03.415847   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:14:03.426381   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:14:03.435652   30564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:14:03.445813   30564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:14:03.456219   30564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:14:03.465527   30564 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0626 20:14:03.465589   30564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:14:03.474873   30564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:14:03.603482   30564 ssh_runner.go:195] Run: sudo systemctl restart crio
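The sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and normalize conmon_cgroup in CRI-O's drop-in config before crio is restarted. A Go sketch of the first two edits as in-place regexp rewrites over the same file (illustrative only; minikube shells out to sed exactly as logged):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
    }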
	I0626 20:14:03.819295   30564 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:14:03.819427   30564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:14:03.824055   30564 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0626 20:14:03.824079   30564 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0626 20:14:03.824089   30564 command_runner.go:130] > Device: 16h/22d	Inode: 1216        Links: 1
	I0626 20:14:03.824100   30564 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 20:14:03.824107   30564 command_runner.go:130] > Access: 2023-06-26 20:14:03.747982459 +0000
	I0626 20:14:03.824116   30564 command_runner.go:130] > Modify: 2023-06-26 20:14:03.747982459 +0000
	I0626 20:14:03.824124   30564 command_runner.go:130] > Change: 2023-06-26 20:14:03.747982459 +0000
	I0626 20:14:03.824130   30564 command_runner.go:130] >  Birth: -
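"Will wait 60s for socket path" amounts to polling stat on /var/run/crio/crio.sock until the socket appears or the deadline passes. A minimal sketch of such a wait loop (the 250ms poll interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond) // poll interval is an assumption
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }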
	I0626 20:14:03.824170   30564 start.go:534] Will wait 60s for crictl version
	I0626 20:14:03.824219   30564 ssh_runner.go:195] Run: which crictl
	I0626 20:14:03.827795   30564 command_runner.go:130] > /usr/bin/crictl
	I0626 20:14:03.827860   30564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:14:03.862796   30564 command_runner.go:130] > Version:  0.1.0
	I0626 20:14:03.862820   30564 command_runner.go:130] > RuntimeName:  cri-o
	I0626 20:14:03.862830   30564 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0626 20:14:03.862835   30564 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0626 20:14:03.863752   30564 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
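The crictl version output above is a plain "key:  value" listing. A hedged sketch of extracting the runtime fields from that text (the parser is illustrative, not minikube's actual code):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func main() {
        // Output captured in the log above.
        out := `Version:  0.1.0
    RuntimeName:  cri-o
    RuntimeVersion:  1.24.1
    RuntimeApiVersion:  v1alpha2`

        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
                fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
            }
        }
        fmt.Printf("%s %s (CRI API %s)\n",
            fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
    }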
	I0626 20:14:03.863814   30564 ssh_runner.go:195] Run: crio --version
	I0626 20:14:03.912943   30564 command_runner.go:130] > crio version 1.24.1
	I0626 20:14:03.912961   30564 command_runner.go:130] > Version:          1.24.1
	I0626 20:14:03.912968   30564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 20:14:03.912972   30564 command_runner.go:130] > GitTreeState:     dirty
	I0626 20:14:03.912977   30564 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 20:14:03.912982   30564 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 20:14:03.912986   30564 command_runner.go:130] > Compiler:         gc
	I0626 20:14:03.912990   30564 command_runner.go:130] > Platform:         linux/amd64
	I0626 20:14:03.912995   30564 command_runner.go:130] > Linkmode:         dynamic
	I0626 20:14:03.913002   30564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 20:14:03.913006   30564 command_runner.go:130] > SeccompEnabled:   true
	I0626 20:14:03.913010   30564 command_runner.go:130] > AppArmorEnabled:  false
	I0626 20:14:03.914360   30564 ssh_runner.go:195] Run: crio --version
	I0626 20:14:03.960230   30564 command_runner.go:130] > crio version 1.24.1
	I0626 20:14:03.960249   30564 command_runner.go:130] > Version:          1.24.1
	I0626 20:14:03.960256   30564 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0626 20:14:03.960260   30564 command_runner.go:130] > GitTreeState:     dirty
	I0626 20:14:03.960266   30564 command_runner.go:130] > BuildDate:        2023-06-22T22:07:45Z
	I0626 20:14:03.960271   30564 command_runner.go:130] > GoVersion:        go1.19.9
	I0626 20:14:03.960275   30564 command_runner.go:130] > Compiler:         gc
	I0626 20:14:03.960279   30564 command_runner.go:130] > Platform:         linux/amd64
	I0626 20:14:03.960284   30564 command_runner.go:130] > Linkmode:         dynamic
	I0626 20:14:03.960291   30564 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 20:14:03.960295   30564 command_runner.go:130] > SeccompEnabled:   true
	I0626 20:14:03.960299   30564 command_runner.go:130] > AppArmorEnabled:  false
	I0626 20:14:03.963787   30564 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:14:03.965219   30564 out.go:177]   - env NO_PROXY=192.168.39.229
	I0626 20:14:03.966553   30564 out.go:177]   - env NO_PROXY=192.168.39.229,192.168.39.133
	I0626 20:14:03.967814   30564 main.go:141] libmachine: (multinode-050558-m03) Calling .GetIP
	I0626 20:14:03.970178   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:03.970560   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:99:ad", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:01:36 +0000 UTC Type:0 Mac:52:54:00:7f:99:ad Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:multinode-050558-m03 Clientid:01:52:54:00:7f:99:ad}
	I0626 20:14:03.970593   30564 main.go:141] libmachine: (multinode-050558-m03) DBG | domain multinode-050558-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:7f:99:ad in network mk-multinode-050558
	I0626 20:14:03.970753   30564 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 20:14:03.974802   30564 command_runner.go:130] > 192.168.39.1	host.minikube.internal
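The grep above verifies that /etc/hosts already maps host.minikube.internal to the gateway IP 192.168.39.1; minikube only appends the entry when the grep misses. An illustrative check of the same condition:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/hosts")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fs := strings.Fields(sc.Text())
            if len(fs) >= 2 && fs[0] == "192.168.39.1" && fs[1] == "host.minikube.internal" {
                fmt.Println("host.minikube.internal entry present")
                return
            }
        }
        fmt.Println("entry missing; it would be appended here")
    }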
	I0626 20:14:03.974849   30564 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558 for IP: 192.168.39.231
	I0626 20:14:03.974882   30564 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:14:03.975087   30564 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:14:03.975167   30564 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:14:03.975186   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0626 20:14:03.975203   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0626 20:14:03.975226   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0626 20:14:03.975242   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0626 20:14:03.975328   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:14:03.975370   30564 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:14:03.975383   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:14:03.975423   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:14:03.975455   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:14:03.975490   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:14:03.975539   30564 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:14:03.975582   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:14:03.975602   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem -> /usr/share/ca-certificates/14443.pem
	I0626 20:14:03.975618   30564 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> /usr/share/ca-certificates/144432.pem
	I0626 20:14:03.976043   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:14:03.999147   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:14:04.020548   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:14:04.044167   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:14:04.065756   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:14:04.088063   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:14:04.108837   30564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:14:04.129795   30564 ssh_runner.go:195] Run: openssl version
	I0626 20:14:04.142161   30564 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0626 20:14:04.142401   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:14:04.152404   30564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:14:04.156805   30564 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:14:04.156848   30564 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:14:04.156886   30564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:14:04.162140   30564 command_runner.go:130] > b5213941
	I0626 20:14:04.162337   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:14:04.174225   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:14:04.187312   30564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:14:04.191924   30564 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:14:04.191948   30564 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:14:04.191989   30564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:14:04.197546   30564 command_runner.go:130] > 51391683
	I0626 20:14:04.197595   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:14:04.208025   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:14:04.219289   30564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:14:04.223787   30564 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:14:04.224042   30564 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:14:04.224083   30564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:14:04.229058   30564 command_runner.go:130] > 3ec20f2e
	I0626 20:14:04.229324   30564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
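Each CA installed under /usr/share/ca-certificates is hashed with openssl x509 -hash and linked as /etc/ssl/certs/<hash>.0 (b5213941, 51391683 and 3ec20f2e in the log), which is how OpenSSL locates trust anchors. A sketch of one such link step wrapping the same openssl invocation (the helper is illustrative, not minikube code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCert(pemPath string) error {
        // Same invocation as logged: openssl x509 -hash -noout -in <cert>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" above
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs semantics: drop any stale link, then relink.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }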
	I0626 20:14:04.238742   30564 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:14:04.243005   30564 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 20:14:04.243046   30564 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 20:14:04.243141   30564 ssh_runner.go:195] Run: crio config
	I0626 20:14:04.297192   30564 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0626 20:14:04.297221   30564 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0626 20:14:04.297232   30564 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0626 20:14:04.297236   30564 command_runner.go:130] > #
	I0626 20:14:04.297243   30564 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0626 20:14:04.297260   30564 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0626 20:14:04.297275   30564 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0626 20:14:04.297297   30564 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0626 20:14:04.297307   30564 command_runner.go:130] > # reload'.
	I0626 20:14:04.297316   30564 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0626 20:14:04.297325   30564 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0626 20:14:04.297334   30564 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0626 20:14:04.297342   30564 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0626 20:14:04.297348   30564 command_runner.go:130] > [crio]
	I0626 20:14:04.297353   30564 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0626 20:14:04.297364   30564 command_runner.go:130] > # containers images, in this directory.
	I0626 20:14:04.297388   30564 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0626 20:14:04.297404   30564 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0626 20:14:04.297412   30564 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0626 20:14:04.297425   30564 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0626 20:14:04.297434   30564 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0626 20:14:04.297466   30564 command_runner.go:130] > storage_driver = "overlay"
	I0626 20:14:04.297481   30564 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0626 20:14:04.297495   30564 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0626 20:14:04.297505   30564 command_runner.go:130] > storage_option = [
	I0626 20:14:04.297513   30564 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0626 20:14:04.297523   30564 command_runner.go:130] > ]
	I0626 20:14:04.297533   30564 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0626 20:14:04.297543   30564 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0626 20:14:04.297551   30564 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0626 20:14:04.297564   30564 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0626 20:14:04.297573   30564 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0626 20:14:04.297583   30564 command_runner.go:130] > # always happen on a node reboot
	I0626 20:14:04.297593   30564 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0626 20:14:04.297601   30564 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0626 20:14:04.297615   30564 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0626 20:14:04.297631   30564 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0626 20:14:04.297641   30564 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0626 20:14:04.297657   30564 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0626 20:14:04.297673   30564 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0626 20:14:04.297700   30564 command_runner.go:130] > # internal_wipe = true
	I0626 20:14:04.297713   30564 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0626 20:14:04.297727   30564 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0626 20:14:04.297739   30564 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0626 20:14:04.297751   30564 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0626 20:14:04.297764   30564 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0626 20:14:04.297773   30564 command_runner.go:130] > [crio.api]
	I0626 20:14:04.297784   30564 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0626 20:14:04.297791   30564 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0626 20:14:04.297800   30564 command_runner.go:130] > # IP address on which the stream server will listen.
	I0626 20:14:04.297811   30564 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0626 20:14:04.297825   30564 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0626 20:14:04.297837   30564 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0626 20:14:04.297846   30564 command_runner.go:130] > # stream_port = "0"
	I0626 20:14:04.297855   30564 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0626 20:14:04.297865   30564 command_runner.go:130] > # stream_enable_tls = false
	I0626 20:14:04.297877   30564 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0626 20:14:04.297887   30564 command_runner.go:130] > # stream_idle_timeout = ""
	I0626 20:14:04.297898   30564 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0626 20:14:04.297914   30564 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0626 20:14:04.297923   30564 command_runner.go:130] > # minutes.
	I0626 20:14:04.297935   30564 command_runner.go:130] > # stream_tls_cert = ""
	I0626 20:14:04.297949   30564 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0626 20:14:04.297962   30564 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0626 20:14:04.297975   30564 command_runner.go:130] > # stream_tls_key = ""
	I0626 20:14:04.297988   30564 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0626 20:14:04.298007   30564 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0626 20:14:04.298019   30564 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0626 20:14:04.298046   30564 command_runner.go:130] > # stream_tls_ca = ""
	I0626 20:14:04.298062   30564 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 20:14:04.298073   30564 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0626 20:14:04.298088   30564 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 20:14:04.298099   30564 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0626 20:14:04.298120   30564 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0626 20:14:04.298132   30564 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0626 20:14:04.298142   30564 command_runner.go:130] > [crio.runtime]
	I0626 20:14:04.298153   30564 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0626 20:14:04.298166   30564 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0626 20:14:04.298176   30564 command_runner.go:130] > # "nofile=1024:2048"
	I0626 20:14:04.298190   30564 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0626 20:14:04.298200   30564 command_runner.go:130] > # default_ulimits = [
	I0626 20:14:04.298207   30564 command_runner.go:130] > # ]
	I0626 20:14:04.298218   30564 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0626 20:14:04.298227   30564 command_runner.go:130] > # no_pivot = false
	I0626 20:14:04.298237   30564 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0626 20:14:04.298250   30564 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0626 20:14:04.298261   30564 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0626 20:14:04.298274   30564 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0626 20:14:04.298301   30564 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0626 20:14:04.298316   30564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 20:14:04.298326   30564 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0626 20:14:04.298334   30564 command_runner.go:130] > # Cgroup setting for conmon
	I0626 20:14:04.298348   30564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0626 20:14:04.298359   30564 command_runner.go:130] > conmon_cgroup = "pod"
	I0626 20:14:04.298372   30564 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0626 20:14:04.298384   30564 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0626 20:14:04.298395   30564 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 20:14:04.298402   30564 command_runner.go:130] > conmon_env = [
	I0626 20:14:04.298415   30564 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0626 20:14:04.298423   30564 command_runner.go:130] > ]
	I0626 20:14:04.298433   30564 command_runner.go:130] > # Additional environment variables to set for all the
	I0626 20:14:04.298445   30564 command_runner.go:130] > # containers. These are overridden if set in the
	I0626 20:14:04.298458   30564 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0626 20:14:04.298468   30564 command_runner.go:130] > # default_env = [
	I0626 20:14:04.298476   30564 command_runner.go:130] > # ]
	I0626 20:14:04.298487   30564 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0626 20:14:04.298496   30564 command_runner.go:130] > # selinux = false
	I0626 20:14:04.298510   30564 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0626 20:14:04.298524   30564 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0626 20:14:04.298537   30564 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0626 20:14:04.298547   30564 command_runner.go:130] > # seccomp_profile = ""
	I0626 20:14:04.298560   30564 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0626 20:14:04.298572   30564 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0626 20:14:04.298583   30564 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0626 20:14:04.298594   30564 command_runner.go:130] > # which might increase security.
	I0626 20:14:04.298603   30564 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0626 20:14:04.298617   30564 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0626 20:14:04.298630   30564 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0626 20:14:04.298670   30564 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0626 20:14:04.298683   30564 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0626 20:14:04.298692   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:14:04.298703   30564 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0626 20:14:04.298713   30564 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0626 20:14:04.298724   30564 command_runner.go:130] > # the cgroup blockio controller.
	I0626 20:14:04.298735   30564 command_runner.go:130] > # blockio_config_file = ""
	I0626 20:14:04.298749   30564 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0626 20:14:04.298759   30564 command_runner.go:130] > # irqbalance daemon.
	I0626 20:14:04.298769   30564 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0626 20:14:04.298780   30564 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0626 20:14:04.298791   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:14:04.298802   30564 command_runner.go:130] > # rdt_config_file = ""
	I0626 20:14:04.298816   30564 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0626 20:14:04.298826   30564 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0626 20:14:04.298838   30564 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0626 20:14:04.298848   30564 command_runner.go:130] > # separate_pull_cgroup = ""
	I0626 20:14:04.298861   30564 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0626 20:14:04.298874   30564 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0626 20:14:04.298884   30564 command_runner.go:130] > # will be added.
	I0626 20:14:04.298895   30564 command_runner.go:130] > # default_capabilities = [
	I0626 20:14:04.298902   30564 command_runner.go:130] > # 	"CHOWN",
	I0626 20:14:04.298910   30564 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0626 20:14:04.298919   30564 command_runner.go:130] > # 	"FSETID",
	I0626 20:14:04.298929   30564 command_runner.go:130] > # 	"FOWNER",
	I0626 20:14:04.298936   30564 command_runner.go:130] > # 	"SETGID",
	I0626 20:14:04.298946   30564 command_runner.go:130] > # 	"SETUID",
	I0626 20:14:04.298953   30564 command_runner.go:130] > # 	"SETPCAP",
	I0626 20:14:04.298961   30564 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0626 20:14:04.298971   30564 command_runner.go:130] > # 	"KILL",
	I0626 20:14:04.298979   30564 command_runner.go:130] > # ]
	I0626 20:14:04.298991   30564 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0626 20:14:04.299004   30564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 20:14:04.299013   30564 command_runner.go:130] > # default_sysctls = [
	I0626 20:14:04.299019   30564 command_runner.go:130] > # ]
	I0626 20:14:04.299030   30564 command_runner.go:130] > # List of devices on the host that a
	I0626 20:14:04.299041   30564 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0626 20:14:04.299047   30564 command_runner.go:130] > # allowed_devices = [
	I0626 20:14:04.299057   30564 command_runner.go:130] > # 	"/dev/fuse",
	I0626 20:14:04.299065   30564 command_runner.go:130] > # ]
	I0626 20:14:04.299074   30564 command_runner.go:130] > # List of additional devices. specified as
	I0626 20:14:04.299089   30564 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0626 20:14:04.299100   30564 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0626 20:14:04.299123   30564 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 20:14:04.299134   30564 command_runner.go:130] > # additional_devices = [
	I0626 20:14:04.299139   30564 command_runner.go:130] > # ]
	I0626 20:14:04.299148   30564 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0626 20:14:04.299158   30564 command_runner.go:130] > # cdi_spec_dirs = [
	I0626 20:14:04.299165   30564 command_runner.go:130] > # 	"/etc/cdi",
	I0626 20:14:04.299176   30564 command_runner.go:130] > # 	"/var/run/cdi",
	I0626 20:14:04.299185   30564 command_runner.go:130] > # ]
	I0626 20:14:04.299195   30564 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0626 20:14:04.299207   30564 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0626 20:14:04.299217   30564 command_runner.go:130] > # Defaults to false.
	I0626 20:14:04.299224   30564 command_runner.go:130] > # device_ownership_from_security_context = false
	I0626 20:14:04.299234   30564 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0626 20:14:04.299242   30564 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0626 20:14:04.299252   30564 command_runner.go:130] > # hooks_dir = [
	I0626 20:14:04.299260   30564 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0626 20:14:04.299268   30564 command_runner.go:130] > # ]
	I0626 20:14:04.299278   30564 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0626 20:14:04.299295   30564 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0626 20:14:04.299306   30564 command_runner.go:130] > # its default mounts from the following two files:
	I0626 20:14:04.299314   30564 command_runner.go:130] > #
	I0626 20:14:04.299320   30564 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0626 20:14:04.299332   30564 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0626 20:14:04.299344   30564 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0626 20:14:04.299350   30564 command_runner.go:130] > #
	I0626 20:14:04.299363   30564 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0626 20:14:04.299376   30564 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0626 20:14:04.299389   30564 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0626 20:14:04.299402   30564 command_runner.go:130] > #      only add mounts it finds in this file.
	I0626 20:14:04.299410   30564 command_runner.go:130] > #
	I0626 20:14:04.299414   30564 command_runner.go:130] > # default_mounts_file = ""
	I0626 20:14:04.299421   30564 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0626 20:14:04.299435   30564 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0626 20:14:04.299445   30564 command_runner.go:130] > pids_limit = 1024
	I0626 20:14:04.299456   30564 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0626 20:14:04.299469   30564 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0626 20:14:04.299481   30564 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0626 20:14:04.299496   30564 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0626 20:14:04.299506   30564 command_runner.go:130] > # log_size_max = -1
	I0626 20:14:04.299514   30564 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0626 20:14:04.299521   30564 command_runner.go:130] > # log_to_journald = false
	I0626 20:14:04.299531   30564 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0626 20:14:04.299544   30564 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0626 20:14:04.299553   30564 command_runner.go:130] > # Path to directory for container attach sockets.
	I0626 20:14:04.299585   30564 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0626 20:14:04.299597   30564 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0626 20:14:04.299604   30564 command_runner.go:130] > # bind_mount_prefix = ""
	I0626 20:14:04.299613   30564 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0626 20:14:04.299617   30564 command_runner.go:130] > # read_only = false
	I0626 20:14:04.299628   30564 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0626 20:14:04.299641   30564 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0626 20:14:04.299649   30564 command_runner.go:130] > # live configuration reload.
	I0626 20:14:04.299659   30564 command_runner.go:130] > # log_level = "info"
	I0626 20:14:04.299668   30564 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0626 20:14:04.299679   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:14:04.299689   30564 command_runner.go:130] > # log_filter = ""
	I0626 20:14:04.299701   30564 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0626 20:14:04.299710   30564 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0626 20:14:04.299714   30564 command_runner.go:130] > # separated by comma.
	I0626 20:14:04.299721   30564 command_runner.go:130] > # uid_mappings = ""
	I0626 20:14:04.299734   30564 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0626 20:14:04.299746   30564 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0626 20:14:04.299756   30564 command_runner.go:130] > # separated by comma.
	I0626 20:14:04.299762   30564 command_runner.go:130] > # gid_mappings = ""
	I0626 20:14:04.299775   30564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0626 20:14:04.299788   30564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 20:14:04.299797   30564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 20:14:04.299802   30564 command_runner.go:130] > # minimum_mappable_uid = -1
	I0626 20:14:04.299812   30564 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0626 20:14:04.299826   30564 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 20:14:04.299836   30564 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 20:14:04.299847   30564 command_runner.go:130] > # minimum_mappable_gid = -1
	I0626 20:14:04.299856   30564 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0626 20:14:04.299869   30564 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0626 20:14:04.299881   30564 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0626 20:14:04.299889   30564 command_runner.go:130] > # ctr_stop_timeout = 30
	I0626 20:14:04.299895   30564 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0626 20:14:04.299903   30564 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0626 20:14:04.299913   30564 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0626 20:14:04.299925   30564 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0626 20:14:04.299932   30564 command_runner.go:130] > drop_infra_ctr = false
	I0626 20:14:04.299949   30564 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0626 20:14:04.299962   30564 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0626 20:14:04.299975   30564 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0626 20:14:04.299982   30564 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0626 20:14:04.299992   30564 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0626 20:14:04.300000   30564 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0626 20:14:04.300007   30564 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0626 20:14:04.300023   30564 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0626 20:14:04.300034   30564 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0626 20:14:04.300047   30564 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0626 20:14:04.300059   30564 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0626 20:14:04.300072   30564 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0626 20:14:04.300081   30564 command_runner.go:130] > # default_runtime = "runc"
	I0626 20:14:04.300089   30564 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0626 20:14:04.300097   30564 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0626 20:14:04.300116   30564 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0626 20:14:04.300127   30564 command_runner.go:130] > # creation as a file is not desired either.
	I0626 20:14:04.300141   30564 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0626 20:14:04.300152   30564 command_runner.go:130] > # the hostname is being managed dynamically.
	I0626 20:14:04.300161   30564 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0626 20:14:04.300170   30564 command_runner.go:130] > # ]
	I0626 20:14:04.300181   30564 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0626 20:14:04.300195   30564 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0626 20:14:04.300209   30564 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0626 20:14:04.300222   30564 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0626 20:14:04.300230   30564 command_runner.go:130] > #
	I0626 20:14:04.300242   30564 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0626 20:14:04.300251   30564 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0626 20:14:04.300261   30564 command_runner.go:130] > #  runtime_type = "oci"
	I0626 20:14:04.300272   30564 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0626 20:14:04.300285   30564 command_runner.go:130] > #  privileged_without_host_devices = false
	I0626 20:14:04.300296   30564 command_runner.go:130] > #  allowed_annotations = []
	I0626 20:14:04.300305   30564 command_runner.go:130] > # Where:
	I0626 20:14:04.300316   30564 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0626 20:14:04.300329   30564 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0626 20:14:04.300342   30564 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0626 20:14:04.300355   30564 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0626 20:14:04.300384   30564 command_runner.go:130] > #   in $PATH.
	I0626 20:14:04.300397   30564 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0626 20:14:04.300408   30564 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0626 20:14:04.300422   30564 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0626 20:14:04.300428   30564 command_runner.go:130] > #   state.
	I0626 20:14:04.300443   30564 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0626 20:14:04.300456   30564 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0626 20:14:04.300470   30564 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0626 20:14:04.300482   30564 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0626 20:14:04.300496   30564 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0626 20:14:04.300509   30564 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0626 20:14:04.300520   30564 command_runner.go:130] > #   The currently recognized values are:
	I0626 20:14:04.300534   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0626 20:14:04.300549   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0626 20:14:04.300562   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0626 20:14:04.300575   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0626 20:14:04.300591   30564 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0626 20:14:04.300604   30564 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0626 20:14:04.300618   30564 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0626 20:14:04.300632   30564 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0626 20:14:04.300643   30564 command_runner.go:130] > #   should be moved to the container's cgroup
	I0626 20:14:04.300653   30564 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0626 20:14:04.300662   30564 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0626 20:14:04.300672   30564 command_runner.go:130] > runtime_type = "oci"
	I0626 20:14:04.300683   30564 command_runner.go:130] > runtime_root = "/run/runc"
	I0626 20:14:04.300692   30564 command_runner.go:130] > runtime_config_path = ""
	I0626 20:14:04.300699   30564 command_runner.go:130] > monitor_path = ""
	I0626 20:14:04.300709   30564 command_runner.go:130] > monitor_cgroup = ""
	I0626 20:14:04.300717   30564 command_runner.go:130] > monitor_exec_cgroup = ""
	I0626 20:14:04.300731   30564 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0626 20:14:04.300742   30564 command_runner.go:130] > # running containers
	I0626 20:14:04.300752   30564 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0626 20:14:04.300768   30564 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0626 20:14:04.300804   30564 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0626 20:14:04.300817   30564 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0626 20:14:04.300829   30564 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0626 20:14:04.300840   30564 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0626 20:14:04.300851   30564 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0626 20:14:04.300860   30564 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0626 20:14:04.300868   30564 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0626 20:14:04.300875   30564 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0626 20:14:04.300889   30564 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0626 20:14:04.300901   30564 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0626 20:14:04.300915   30564 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0626 20:14:04.300931   30564 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0626 20:14:04.300947   30564 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0626 20:14:04.300959   30564 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0626 20:14:04.300974   30564 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0626 20:14:04.300991   30564 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0626 20:14:04.301005   30564 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0626 20:14:04.301019   30564 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0626 20:14:04.301028   30564 command_runner.go:130] > # Example:
	I0626 20:14:04.301037   30564 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0626 20:14:04.301048   30564 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0626 20:14:04.301057   30564 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0626 20:14:04.301073   30564 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0626 20:14:04.301082   30564 command_runner.go:130] > # cpuset = "0-1"
	I0626 20:14:04.301091   30564 command_runner.go:130] > # cpushares = 0
	I0626 20:14:04.301099   30564 command_runner.go:130] > # Where:
	I0626 20:14:04.301108   30564 command_runner.go:130] > # The workload name is workload-type.
	I0626 20:14:04.301123   30564 command_runner.go:130] > # To opt into this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0626 20:14:04.301134   30564 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0626 20:14:04.301144   30564 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0626 20:14:04.301162   30564 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0626 20:14:04.301176   30564 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0626 20:14:04.301184   30564 command_runner.go:130] > # 
	I0626 20:14:04.301196   30564 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0626 20:14:04.301203   30564 command_runner.go:130] > #
	I0626 20:14:04.301215   30564 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0626 20:14:04.301248   30564 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0626 20:14:04.301261   30564 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0626 20:14:04.301272   30564 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0626 20:14:04.301290   30564 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0626 20:14:04.301299   30564 command_runner.go:130] > [crio.image]
	I0626 20:14:04.301313   30564 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0626 20:14:04.301323   30564 command_runner.go:130] > # default_transport = "docker://"
	I0626 20:14:04.301335   30564 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0626 20:14:04.301348   30564 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0626 20:14:04.301357   30564 command_runner.go:130] > # global_auth_file = ""
	I0626 20:14:04.301367   30564 command_runner.go:130] > # The image used to instantiate infra containers.
	I0626 20:14:04.301386   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:14:04.301396   30564 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0626 20:14:04.301410   30564 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0626 20:14:04.301423   30564 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0626 20:14:04.301435   30564 command_runner.go:130] > # This option supports live configuration reload.
	I0626 20:14:04.301445   30564 command_runner.go:130] > # pause_image_auth_file = ""
	I0626 20:14:04.301458   30564 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0626 20:14:04.301469   30564 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0626 20:14:04.301482   30564 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0626 20:14:04.301495   30564 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0626 20:14:04.301506   30564 command_runner.go:130] > # pause_command = "/pause"
	I0626 20:14:04.301517   30564 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0626 20:14:04.301530   30564 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0626 20:14:04.301544   30564 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0626 20:14:04.301557   30564 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0626 20:14:04.301569   30564 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0626 20:14:04.301579   30564 command_runner.go:130] > # signature_policy = ""
	I0626 20:14:04.301590   30564 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0626 20:14:04.301603   30564 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0626 20:14:04.301613   30564 command_runner.go:130] > # changing them here.
	I0626 20:14:04.301623   30564 command_runner.go:130] > # insecure_registries = [
	I0626 20:14:04.301631   30564 command_runner.go:130] > # ]
	I0626 20:14:04.301643   30564 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0626 20:14:04.301659   30564 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0626 20:14:04.301671   30564 command_runner.go:130] > # image_volumes = "mkdir"
	I0626 20:14:04.301683   30564 command_runner.go:130] > # Temporary directory to use for storing big files
	I0626 20:14:04.301694   30564 command_runner.go:130] > # big_files_temporary_dir = ""
	I0626 20:14:04.301707   30564 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0626 20:14:04.301714   30564 command_runner.go:130] > # CNI plugins.
	I0626 20:14:04.301724   30564 command_runner.go:130] > [crio.network]
	I0626 20:14:04.301737   30564 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0626 20:14:04.301749   30564 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0626 20:14:04.301759   30564 command_runner.go:130] > # cni_default_network = ""
	I0626 20:14:04.301770   30564 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0626 20:14:04.301781   30564 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0626 20:14:04.301793   30564 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0626 20:14:04.301802   30564 command_runner.go:130] > # plugin_dirs = [
	I0626 20:14:04.301809   30564 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0626 20:14:04.301818   30564 command_runner.go:130] > # ]
	I0626 20:14:04.301828   30564 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0626 20:14:04.301838   30564 command_runner.go:130] > [crio.metrics]
	I0626 20:14:04.301848   30564 command_runner.go:130] > # Globally enable or disable metrics support.
	I0626 20:14:04.301858   30564 command_runner.go:130] > enable_metrics = true
	I0626 20:14:04.301870   30564 command_runner.go:130] > # Specify enabled metrics collectors.
	I0626 20:14:04.301881   30564 command_runner.go:130] > # By default, all metrics are enabled.
	I0626 20:14:04.301894   30564 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0626 20:14:04.301906   30564 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0626 20:14:04.301919   30564 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0626 20:14:04.301929   30564 command_runner.go:130] > # metrics_collectors = [
	I0626 20:14:04.301936   30564 command_runner.go:130] > # 	"operations",
	I0626 20:14:04.301948   30564 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0626 20:14:04.301959   30564 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0626 20:14:04.301969   30564 command_runner.go:130] > # 	"operations_errors",
	I0626 20:14:04.301978   30564 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0626 20:14:04.301987   30564 command_runner.go:130] > # 	"image_pulls_by_name",
	I0626 20:14:04.301995   30564 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0626 20:14:04.302005   30564 command_runner.go:130] > # 	"image_pulls_failures",
	I0626 20:14:04.302014   30564 command_runner.go:130] > # 	"image_pulls_successes",
	I0626 20:14:04.302024   30564 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0626 20:14:04.302032   30564 command_runner.go:130] > # 	"image_layer_reuse",
	I0626 20:14:04.302056   30564 command_runner.go:130] > # 	"containers_oom_total",
	I0626 20:14:04.302066   30564 command_runner.go:130] > # 	"containers_oom",
	I0626 20:14:04.302077   30564 command_runner.go:130] > # 	"processes_defunct",
	I0626 20:14:04.302096   30564 command_runner.go:130] > # 	"operations_total",
	I0626 20:14:04.302103   30564 command_runner.go:130] > # 	"operations_latency_seconds",
	I0626 20:14:04.302120   30564 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0626 20:14:04.302131   30564 command_runner.go:130] > # 	"operations_errors_total",
	I0626 20:14:04.302141   30564 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0626 20:14:04.302150   30564 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0626 20:14:04.302161   30564 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0626 20:14:04.302172   30564 command_runner.go:130] > # 	"image_pulls_success_total",
	I0626 20:14:04.302182   30564 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0626 20:14:04.302190   30564 command_runner.go:130] > # 	"containers_oom_count_total",
	I0626 20:14:04.302199   30564 command_runner.go:130] > # ]
	I0626 20:14:04.302208   30564 command_runner.go:130] > # The port on which the metrics server will listen.
	I0626 20:14:04.302218   30564 command_runner.go:130] > # metrics_port = 9090
	I0626 20:14:04.302230   30564 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0626 20:14:04.302240   30564 command_runner.go:130] > # metrics_socket = ""
	I0626 20:14:04.302249   30564 command_runner.go:130] > # The certificate for the secure metrics server.
	I0626 20:14:04.302262   30564 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0626 20:14:04.302279   30564 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0626 20:14:04.302296   30564 command_runner.go:130] > # certificate on any modification event.
	I0626 20:14:04.302306   30564 command_runner.go:130] > # metrics_cert = ""
	I0626 20:14:04.302318   30564 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0626 20:14:04.302329   30564 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0626 20:14:04.302339   30564 command_runner.go:130] > # metrics_key = ""
	I0626 20:14:04.302350   30564 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0626 20:14:04.302359   30564 command_runner.go:130] > [crio.tracing]
	I0626 20:14:04.302369   30564 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0626 20:14:04.302379   30564 command_runner.go:130] > # enable_tracing = false
	I0626 20:14:04.302389   30564 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0626 20:14:04.302400   30564 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0626 20:14:04.302411   30564 command_runner.go:130] > # Number of samples to collect per million spans.
	I0626 20:14:04.302422   30564 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0626 20:14:04.302435   30564 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0626 20:14:04.302444   30564 command_runner.go:130] > [crio.stats]
	I0626 20:14:04.302456   30564 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0626 20:14:04.302469   30564 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0626 20:14:04.302480   30564 command_runner.go:130] > # stats_collection_period = 0
	I0626 20:14:04.302529   30564 command_runner.go:130] ! time="2023-06-26 20:14:04.289622215Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0626 20:14:04.302548   30564 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0626 20:14:04.302617   30564 cni.go:84] Creating CNI manager for ""
	I0626 20:14:04.302628   30564 cni.go:137] 3 nodes found, recommending kindnet
	I0626 20:14:04.302638   30564 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:14:04.302662   30564 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-050558 NodeName:multinode-050558-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:14:04.302797   30564 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-050558-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:14:04.302854   30564 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-050558-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 20:14:04.302912   30564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:14:04.312905   30564 command_runner.go:130] > kubeadm
	I0626 20:14:04.312922   30564 command_runner.go:130] > kubectl
	I0626 20:14:04.312926   30564 command_runner.go:130] > kubelet
	I0626 20:14:04.312943   30564 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:14:04.312985   30564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0626 20:14:04.322232   30564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0626 20:14:04.338142   30564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:14:04.354007   30564 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0626 20:14:04.357994   30564 command_runner.go:130] > 192.168.39.229	control-plane.minikube.internal
	I0626 20:14:04.358135   30564 host.go:66] Checking if "multinode-050558" exists ...
	I0626 20:14:04.358479   30564 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:14:04.358646   30564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:14:04.358696   30564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:14:04.373117   30564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0626 20:14:04.373480   30564 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:14:04.373906   30564 main.go:141] libmachine: Using API Version  1
	I0626 20:14:04.373929   30564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:14:04.374204   30564 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:14:04.374390   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:14:04.374526   30564 start.go:301] JoinCluster: &{Name:multinode-050558 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.27.3 ClusterName:multinode-050558 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.133 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:14:04.374646   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0626 20:14:04.374659   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:14:04.377369   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:14:04.377849   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:14:04.377885   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:14:04.377970   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:14:04.378123   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:14:04.378262   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:14:04.378404   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:14:04.569394   30564 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 6ijxiw.22lhc15inh96bt0z --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:14:04.569434   30564 start.go:314] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0626 20:14:04.569462   30564 host.go:66] Checking if "multinode-050558" exists ...
	I0626 20:14:04.569812   30564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:14:04.569848   30564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:14:04.584144   30564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0626 20:14:04.584499   30564 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:14:04.584913   30564 main.go:141] libmachine: Using API Version  1
	I0626 20:14:04.584933   30564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:14:04.585212   30564 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:14:04.585436   30564 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:14:04.585618   30564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-050558-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0626 20:14:04.585640   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:14:04.588475   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:14:04.588954   30564 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:10:01 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:14:04.588978   30564 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:14:04.589099   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:14:04.589269   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:14:04.589435   30564 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:14:04.589611   30564 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:14:04.793702   30564 command_runner.go:130] > node/multinode-050558-m03 cordoned
	I0626 20:14:07.832530   30564 command_runner.go:130] > pod "busybox-67b7f59bb-b5z7t" has DeletionTimestamp older than 1 seconds, skipping
	I0626 20:14:07.832559   30564 command_runner.go:130] > node/multinode-050558-m03 drained
	I0626 20:14:07.834441   30564 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0626 20:14:07.834466   30564 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-9tprm, kube-system/kube-proxy-57pwt
	I0626 20:14:07.834489   30564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-050558-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.248851421s)
	I0626 20:14:07.834505   30564 node.go:108] successfully drained node "m03"
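
The drain above is performed by shelling out to the cluster's own kubectl binary rather than going through the API directly. A minimal Go sketch of the same invocation via os/exec; the binary path, kubeconfig path, node name, and flags are the ones logged above, not general defaults:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the logged command: --disable-eviction deletes pods directly
		// instead of using the eviction API, and DaemonSet-managed pods are
		// skipped rather than failing the drain.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig", // sudo VAR=value sets env for the child
			"/var/lib/minikube/binaries/v1.27.3/kubectl",
			"drain", "multinode-050558-m03",
			"--force", "--grace-period=1", "--skip-wait-for-delete-timeout=1",
			"--disable-eviction", "--ignore-daemonsets", "--delete-emptydir-data")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("drain failed:", err)
		}
	}
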
	I0626 20:14:07.834921   30564 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:14:07.835197   30564 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:14:07.835473   30564 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0626 20:14:07.835521   30564 round_trippers.go:463] DELETE https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:14:07.835529   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:07.835538   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:07.835544   30564 round_trippers.go:473]     Content-Type: application/json
	I0626 20:14:07.835552   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:07.848714   30564 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0626 20:14:07.848736   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:07.848747   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:07.848756   30564 round_trippers.go:580]     Content-Length: 171
	I0626 20:14:07.848763   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:07 GMT
	I0626 20:14:07.848772   30564 round_trippers.go:580]     Audit-Id: 50a4e49a-040b-4f81-ac5e-14b018ef6af0
	I0626 20:14:07.848792   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:07.848810   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:07.848818   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:07.849135   30564 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-050558-m03","kind":"nodes","uid":"0d94d9a3-b2d7-4a89-99ad-2d23c494ddb0"}}
	I0626 20:14:07.849196   30564 node.go:124] successfully deleted node "m03"
	I0626 20:14:07.849211   30564 start.go:318] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
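
The removal above is a plain DELETE on the Node object, as the round_trippers lines show. A client-go sketch of the equivalent call, assuming a kubeconfig at the default location (the node name is taken from the log):

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent to the logged request:
		// DELETE /api/v1/nodes/multinode-050558-m03
		if err := cs.CoreV1().Nodes().Delete(context.Background(),
			"multinode-050558-m03", metav1.DeleteOptions{}); err != nil {
			log.Fatal(err)
		}
	}
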
	I0626 20:14:07.849232   30564 start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0626 20:14:07.849254   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6ijxiw.22lhc15inh96bt0z --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-050558-m03"
	I0626 20:14:07.903101   30564 command_runner.go:130] > [preflight] Running pre-flight checks
	I0626 20:14:08.060242   30564 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0626 20:14:08.060277   30564 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0626 20:14:08.120999   30564 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:14:08.121029   30564 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:14:08.121036   30564 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0626 20:14:08.245103   30564 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0626 20:14:08.771864   30564 command_runner.go:130] > This node has joined the cluster:
	I0626 20:14:08.771883   30564 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0626 20:14:08.771890   30564 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0626 20:14:08.771898   30564 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0626 20:14:08.774713   30564 command_runner.go:130] ! W0626 20:14:07.897066    2289 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0626 20:14:08.774740   30564 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0626 20:14:08.774749   30564 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0626 20:14:08.774760   30564 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0626 20:14:08.774826   30564 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0626 20:14:09.062010   30564 start.go:303] JoinCluster complete in 4.687476518s
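
The join flow above has two halves: print a fresh join command on the control plane, then replay it on the worker with extra flags appended. A compressed, illustrative sketch of that sequence; in the real flow each command runs over SSH on the respective node, and the paths and flags are those logged above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// 1) Ask the control plane for a join command with a non-expiring token.
		out, err := exec.Command("/bin/bash", "-c",
			`sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0`).Output()
		if err != nil {
			panic(err)
		}
		// 2) Replay it on the worker, appending the flags seen in the log.
		join := strings.TrimSpace(string(out)) +
			" --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-050558-m03"
		fmt.Println(exec.Command("/bin/bash", "-c",
			`sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" `+join).Run())
	}
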
	I0626 20:14:09.062032   30564 cni.go:84] Creating CNI manager for ""
	I0626 20:14:09.062038   30564 cni.go:137] 3 nodes found, recommending kindnet
	I0626 20:14:09.062082   30564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0626 20:14:09.067573   30564 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0626 20:14:09.067598   30564 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0626 20:14:09.067611   30564 command_runner.go:130] > Device: 11h/17d	Inode: 3543        Links: 1
	I0626 20:14:09.067621   30564 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 20:14:09.067634   30564 command_runner.go:130] > Access: 2023-06-26 20:10:02.269403478 +0000
	I0626 20:14:09.067646   30564 command_runner.go:130] > Modify: 2023-06-22 22:21:30.000000000 +0000
	I0626 20:14:09.067655   30564 command_runner.go:130] > Change: 2023-06-26 20:10:00.284403478 +0000
	I0626 20:14:09.067662   30564 command_runner.go:130] >  Birth: -
	I0626 20:14:09.067812   30564 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0626 20:14:09.067832   30564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0626 20:14:09.088060   30564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0626 20:14:09.535592   30564 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0626 20:14:09.544307   30564 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0626 20:14:09.548714   30564 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0626 20:14:09.566668   30564 command_runner.go:130] > daemonset.apps/kindnet configured
	I0626 20:14:09.570524   30564 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:14:09.570719   30564 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:14:09.570976   30564 round_trippers.go:463] GET https://192.168.39.229:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 20:14:09.570987   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.570995   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.571001   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.574475   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:14:09.574488   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.574494   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.574500   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.574506   30564 round_trippers.go:580]     Content-Length: 291
	I0626 20:14:09.574511   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.574516   30564 round_trippers.go:580]     Audit-Id: d01d0e7d-c868-474d-8a6f-804bfc74c935
	I0626 20:14:09.574521   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.574527   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.574544   30564 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94c202ca-4f15-4fc0-a8d2-e6d62293ec32","resourceVersion":"861","creationTimestamp":"2023-06-26T20:00:04Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0626 20:14:09.574618   30564 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-050558" context rescaled to 1 replicas
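
The rescale above goes through the Deployment's scale subresource rather than patching the Deployment itself. A client-go sketch of the same read-modify-write, assuming an already-constructed *kubernetes.Clientset (the helper name is illustrative):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS mirrors the GET/PUT pair on
	// /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale.
	func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
		d := cs.AppsV1().Deployments("kube-system")
		sc, err := d.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Spec.Replicas == 1 {
			return nil // already at the desired count
		}
		sc.Spec.Replicas = 1
		_, err = d.UpdateScale(ctx, "coredns", sc, metav1.UpdateOptions{})
		return err
	}
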
	I0626 20:14:09.574642   30564 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0626 20:14:09.576539   30564 out.go:177] * Verifying Kubernetes components...
	I0626 20:14:09.577843   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:14:09.591536   30564 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:14:09.591786   30564 kapi.go:59] client config for multinode-050558: &rest.Config{Host:"https://192.168.39.229:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/profiles/multinode-050558/client.key", CAFile:"/home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 20:14:09.592075   30564 node_ready.go:35] waiting up to 6m0s for node "multinode-050558-m03" to be "Ready" ...
	I0626 20:14:09.592142   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:14:09.592153   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.592164   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.592177   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.595628   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:14:09.595651   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.595658   30564 round_trippers.go:580]     Audit-Id: 93d492d4-e6f1-4d69-b977-a449899edad8
	I0626 20:14:09.595664   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.595671   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.595680   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.595692   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.595701   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.595830   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m03","uid":"8bd9a4a7-499e-4663-9c3e-5d23eed23ce7","resourceVersion":"1176","creationTimestamp":"2023-06-26T20:14:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:14:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:14:08Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0626 20:14:09.596095   30564 node_ready.go:49] node "multinode-050558-m03" has status "Ready":"True"
	I0626 20:14:09.596109   30564 node_ready.go:38] duration metric: took 4.017864ms waiting for node "multinode-050558-m03" to be "Ready" ...
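
The node_ready step is a poll loop against the Node's Ready condition with a 6m budget; here it succeeds on the first GET. A client-go sketch of such a loop (the interval and helper name are illustrative, not minikube's actual internals):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls until the node reports Ready=True or the timeout hits.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as retryable
			}
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
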
	I0626 20:14:09.596119   30564 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:14:09.596174   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods
	I0626 20:14:09.596184   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.596194   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.596204   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.601439   30564 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0626 20:14:09.601456   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.601466   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.601474   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.601482   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.601489   30564 round_trippers.go:580]     Audit-Id: 0dcecaac-9f70-4686-94b9-054d7a371e54
	I0626 20:14:09.601501   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.601511   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.602416   30564 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1183"},"items":[{"metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"838","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82090 chars]
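Note that the extra wait fetches the whole kube-system PodList once and filters it client-side against the selectors listed above. A sketch of the equivalent per-selector query with client-go (the helper name and the server-side filtering are this sketch's choices, not minikube's):

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listCritical fetches kube-system pods for one of the label selectors the
// log enumerates (e.g. "k8s-app=kube-dns", "component=etcd", ...).
func listCritical(ctx context.Context, cs kubernetes.Interface, selector string) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
	return nil
}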
	I0626 20:14:09.604751   30564 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:09.604820   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-5wffn
	I0626 20:14:09.604830   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.604837   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.604843   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.607809   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:09.607825   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.607835   30564 round_trippers.go:580]     Audit-Id: 8a4f5844-dcc4-415c-aaf2-66987a1ede86
	I0626 20:14:09.607843   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.607852   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.607862   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.607871   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.607888   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.608414   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-5wffn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5","resourceVersion":"838","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"dcd0ac65-4e83-4528-a2b6-37f494515be8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcd0ac65-4e83-4528-a2b6-37f494515be8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0626 20:14:09.608812   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:14:09.608824   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.608831   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.608840   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.611205   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:09.611225   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.611234   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.611243   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.611251   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.611260   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.611269   30564 round_trippers.go:580]     Audit-Id: 1f3e8bff-9658-47a6-81d4-93e64208f63d
	I0626 20:14:09.611291   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.611492   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:14:09.611806   30564 pod_ready.go:92] pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace has status "Ready":"True"
	I0626 20:14:09.611820   30564 pod_ready.go:81] duration metric: took 7.051227ms waiting for pod "coredns-5d78c9869d-5wffn" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:09.611828   30564 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:09.611863   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-050558
	I0626 20:14:09.611871   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.611877   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.611884   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.613919   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:09.613933   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.613940   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.613952   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.613971   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.613980   30564 round_trippers.go:580]     Audit-Id: 6749582f-d992-47f8-83cd-56d88a05aad9
	I0626 20:14:09.613992   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.614000   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.614207   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-050558","namespace":"kube-system","uid":"457d2420-8ece-4b92-8281-7866fa6a884a","resourceVersion":"832","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.229:2379","kubernetes.io/config.hash":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.mirror":"a51ca9066ce980968640db5826cdbb03","kubernetes.io/config.seen":"2023-06-26T19:59:55.756268397Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0626 20:14:09.614651   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:14:09.614667   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.614678   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.614692   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.616887   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:09.616908   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.616918   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.616926   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.616934   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.616948   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.616957   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.616968   30564 round_trippers.go:580]     Audit-Id: 92731892-6515-4c11-bfbc-509e5dbafba6
	I0626 20:14:09.617116   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:14:09.617452   30564 pod_ready.go:92] pod "etcd-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:14:09.617467   30564 pod_ready.go:81] duration metric: took 5.633682ms waiting for pod "etcd-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:09.617496   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:09.617547   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-050558
	I0626 20:14:09.617556   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.617567   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.617580   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.619631   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:09.619649   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.619658   30564 round_trippers.go:580]     Audit-Id: 19d54fce-480a-4e5f-b63f-118c4c3a6c2a
	I0626 20:14:09.619666   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.619683   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.619692   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.619703   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.619714   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.619872   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-050558","namespace":"kube-system","uid":"00573436-b505-4be6-a86a-3ba9b74e1ad5","resourceVersion":"864","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.229:8443","kubernetes.io/config.hash":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.mirror":"3bf9120f8ca60da96af0ed761aeff36b","kubernetes.io/config.seen":"2023-06-26T19:59:55.756272769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0626 20:14:09.620320   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:14:09.620340   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.620351   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.620365   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.622248   30564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:14:09.622263   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.622271   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.622280   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.622289   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.622299   30564 round_trippers.go:580]     Audit-Id: 1e8ab300-0f6a-4d9c-bff5-35e795aacb76
	I0626 20:14:09.622310   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.622326   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.622494   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:14:09.622848   30564 pod_ready.go:92] pod "kube-apiserver-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:14:09.622867   30564 pod_ready.go:81] duration metric: took 5.360915ms waiting for pod "kube-apiserver-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:09.622879   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:09.622932   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-050558
	I0626 20:14:09.622944   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.622953   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.622966   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.625042   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:09.625055   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.625064   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.625073   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.625082   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.625096   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.625106   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.625119   30564 round_trippers.go:580]     Audit-Id: 19610742-5f74-4c1d-88d1-84b4aad9f9aa
	I0626 20:14:09.625311   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-050558","namespace":"kube-system","uid":"d90eb1a6-03bd-4bdf-b50d-9448cef0b578","resourceVersion":"831","creationTimestamp":"2023-06-26T20:00:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.mirror":"ce8b8fdad19a87f17af5276f1f8a428a","kubernetes.io/config.seen":"2023-06-26T20:00:04.802665770Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0626 20:14:09.625756   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:14:09.625777   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.625785   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.625807   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.627430   30564 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 20:14:09.627443   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.627452   30564 round_trippers.go:580]     Audit-Id: 07dffec6-5b24-4274-8f33-04833e8f32a5
	I0626 20:14:09.627461   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.627476   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.627489   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.627498   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.627511   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.627652   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:14:09.627908   30564 pod_ready.go:92] pod "kube-controller-manager-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:14:09.627921   30564 pod_ready.go:81] duration metric: took 5.03008ms waiting for pod "kube-controller-manager-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:09.627931   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-57pwt" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:09.792430   30564 request.go:628] Waited for 164.447561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-57pwt
	I0626 20:14:09.792497   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-57pwt
	I0626 20:14:09.792502   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.792510   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.792516   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.795649   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:14:09.795673   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.795681   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.795687   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.795692   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.795697   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.795703   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.795708   30564 round_trippers.go:580]     Audit-Id: 9a97b3dc-0d80-4399-b49b-f84756ddca59
	I0626 20:14:09.795810   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-57pwt","generateName":"kube-proxy-","namespace":"kube-system","uid":"4611d3e6-962b-437a-8b38-387719e69da6","resourceVersion":"1180","creationTimestamp":"2023-06-26T20:01:54Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:01:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0626 20:14:09.992625   30564 request.go:628] Waited for 196.388554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:14:09.992678   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:14:09.992683   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:09.992690   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:09.992696   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:09.995927   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:14:09.995951   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:09.995962   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:09.995971   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:09 GMT
	I0626 20:14:09.995980   30564 round_trippers.go:580]     Audit-Id: f176ffd0-1eec-4aa5-8af1-b5494ea57346
	I0626 20:14:09.995990   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:09.996002   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:09.996014   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:09.996332   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m03","uid":"8bd9a4a7-499e-4663-9c3e-5d23eed23ce7","resourceVersion":"1176","creationTimestamp":"2023-06-26T20:14:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:14:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:14:08Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0626 20:14:10.497556   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-57pwt
	I0626 20:14:10.497596   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:10.497608   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:10.497617   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:10.501434   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:14:10.501460   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:10.501478   30564 round_trippers.go:580]     Audit-Id: baca1b91-ec50-40eb-b9a5-787e3c7f3ee8
	I0626 20:14:10.501487   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:10.501495   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:10.501502   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:10.501510   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:10.501518   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:10 GMT
	I0626 20:14:10.501701   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-57pwt","generateName":"kube-proxy-","namespace":"kube-system","uid":"4611d3e6-962b-437a-8b38-387719e69da6","resourceVersion":"1191","creationTimestamp":"2023-06-26T20:01:54Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:01:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0626 20:14:10.502251   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m03
	I0626 20:14:10.502333   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:10.502350   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:10.502371   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:10.504745   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:10.504765   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:10.504774   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:10 GMT
	I0626 20:14:10.504783   30564 round_trippers.go:580]     Audit-Id: a38677cb-b9e2-4629-aa37-3a3092d611a4
	I0626 20:14:10.504791   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:10.504821   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:10.504831   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:10.504837   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:10.504902   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m03","uid":"8bd9a4a7-499e-4663-9c3e-5d23eed23ce7","resourceVersion":"1176","creationTimestamp":"2023-06-26T20:14:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:14:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:14:08Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0626 20:14:10.505176   30564 pod_ready.go:92] pod "kube-proxy-57pwt" in "kube-system" namespace has status "Ready":"True"
	I0626 20:14:10.505192   30564 pod_ready.go:81] duration metric: took 877.255184ms waiting for pod "kube-proxy-57pwt" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:10.505201   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:10.592581   30564 request.go:628] Waited for 87.308005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:14:10.592631   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-67x99
	I0626 20:14:10.592636   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:10.592644   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:10.592650   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:10.595985   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:14:10.596007   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:10.596014   30564 round_trippers.go:580]     Audit-Id: 197a1031-1e92-4ced-a4c3-b87b9d1ee2fa
	I0626 20:14:10.596020   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:10.596026   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:10.596031   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:10.596039   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:10.596048   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:10 GMT
	I0626 20:14:10.596295   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-67x99","generateName":"kube-proxy-","namespace":"kube-system","uid":"7ffa817a-1b4a-41a1-9a56-5c65849dc57e","resourceVersion":"744","creationTimestamp":"2023-06-26T20:00:16Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0626 20:14:10.793072   30564 request.go:628] Waited for 196.392985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:14:10.793140   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:14:10.793145   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:10.793152   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:10.793158   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:10.795984   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:10.796013   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:10.796023   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:10.796030   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:10.796036   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:10.796041   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:10.796047   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:10 GMT
	I0626 20:14:10.796054   30564 round_trippers.go:580]     Audit-Id: 842db9c6-ce70-4f8b-9147-148452b02f8a
	I0626 20:14:10.796753   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:14:10.797095   30564 pod_ready.go:92] pod "kube-proxy-67x99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:14:10.797110   30564 pod_ready.go:81] duration metric: took 291.904055ms waiting for pod "kube-proxy-67x99" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:10.797120   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:10.992562   30564 request.go:628] Waited for 195.380717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:14:10.992633   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wwg6x
	I0626 20:14:10.992642   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:10.992650   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:10.992657   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:10.995662   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:10.995680   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:10.995686   30564 round_trippers.go:580]     Audit-Id: 5d4d3d2e-0b01-42ec-91c7-155ef2ed268b
	I0626 20:14:10.995692   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:10.995697   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:10.995704   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:10.995713   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:10.995722   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:10 GMT
	I0626 20:14:10.996108   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wwg6x","generateName":"kube-proxy-","namespace":"kube-system","uid":"bdb04dda-dd36-45be-8f0e-7dad2bce1ef0","resourceVersion":"1018","creationTimestamp":"2023-06-26T20:00:59Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32333501-e76c-4837-b478-9a08cb90cbfa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32333501-e76c-4837-b478-9a08cb90cbfa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0626 20:14:11.192892   30564 request.go:628] Waited for 196.375269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:14:11.192981   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558-m02
	I0626 20:14:11.192993   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:11.193005   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:11.193017   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:11.195832   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:11.195851   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:11.195858   30564 round_trippers.go:580]     Audit-Id: 870b23ef-9817-4b73-a3de-d54f28e159c9
	I0626 20:14:11.195863   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:11.195875   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:11.195883   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:11.195892   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:11.195901   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:11 GMT
	I0626 20:14:11.196204   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558-m02","uid":"3b6f3e73-9c2f-495b-9525-5a38ba85fc78","resourceVersion":"1000","creationTimestamp":"2023-06-26T20:12:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:12:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:12:27Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vol [truncated 3442 chars]
	I0626 20:14:11.196480   30564 pod_ready.go:92] pod "kube-proxy-wwg6x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:14:11.196502   30564 pod_ready.go:81] duration metric: took 399.373426ms waiting for pod "kube-proxy-wwg6x" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:11.196514   30564 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:11.392954   30564 request.go:628] Waited for 196.370825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:14:11.393009   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-050558
	I0626 20:14:11.393014   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:11.393027   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:11.393036   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:11.395523   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:11.395543   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:11.395550   30564 round_trippers.go:580]     Audit-Id: 0dcfa21c-def3-4907-91b1-423105786fc8
	I0626 20:14:11.395556   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:11.395561   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:11.395567   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:11.395572   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:11.395578   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:11 GMT
	I0626 20:14:11.395745   30564 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-050558","namespace":"kube-system","uid":"1645e687-25f4-49b9-9d11-5f3db01fe7d2","resourceVersion":"848","creationTimestamp":"2023-06-26T20:00:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.mirror":"fb51be42b8f4d7cafa13e10ab353dbbb","kubernetes.io/config.seen":"2023-06-26T19:59:55.756274617Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T20:00:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0626 20:14:11.592519   30564 request.go:628] Waited for 196.334809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:14:11.592580   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes/multinode-050558
	I0626 20:14:11.592587   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:11.592597   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:11.592606   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:11.595398   30564 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 20:14:11.595415   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:11.595421   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:11.595427   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:11.595433   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:11.595441   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:11.595447   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:11 GMT
	I0626 20:14:11.595454   30564 round_trippers.go:580]     Audit-Id: 6f7f0986-b2e0-4c96-b431-c71b012def3f
	I0626 20:14:11.595577   30564 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T20:00:01Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0626 20:14:11.595975   30564 pod_ready.go:92] pod "kube-scheduler-multinode-050558" in "kube-system" namespace has status "Ready":"True"
	I0626 20:14:11.596061   30564 pod_ready.go:81] duration metric: took 399.533296ms waiting for pod "kube-scheduler-multinode-050558" in "kube-system" namespace to be "Ready" ...
	I0626 20:14:11.596087   30564 pod_ready.go:38] duration metric: took 1.999956613s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:14:11.596111   30564 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:14:11.596168   30564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:14:11.609951   30564 system_svc.go:56] duration metric: took 13.835622ms WaitForService to wait for kubelet.
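The kubelet check here is just a systemctl query run over SSH; `is-active --quiet` exits 0 only when the unit is active. A sketch of the same probe run locally with os/exec (this is not minikube's ssh_runner):

import (
	"os/exec"
)

// kubeletActive mirrors the `sudo systemctl is-active --quiet service kubelet`
// probe logged above; a nil error means the command exited 0 (unit active).
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}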
	I0626 20:14:11.609974   30564 kubeadm.go:581] duration metric: took 2.035310242s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:14:11.609996   30564 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:14:11.792833   30564 request.go:628] Waited for 182.760123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.229:8443/api/v1/nodes
	I0626 20:14:11.792903   30564 round_trippers.go:463] GET https://192.168.39.229:8443/api/v1/nodes
	I0626 20:14:11.792912   30564 round_trippers.go:469] Request Headers:
	I0626 20:14:11.792922   30564 round_trippers.go:473]     Accept: application/json, */*
	I0626 20:14:11.792945   30564 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 20:14:11.796074   30564 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 20:14:11.796100   30564 round_trippers.go:577] Response Headers:
	I0626 20:14:11.796110   30564 round_trippers.go:580]     Audit-Id: 519da316-606c-42ab-a509-f6e3cd96f53c
	I0626 20:14:11.796118   30564 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 20:14:11.796125   30564 round_trippers.go:580]     Content-Type: application/json
	I0626 20:14:11.796133   30564 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3fcb4b02-c3ca-49eb-81d3-22e3eff0efeb
	I0626 20:14:11.796140   30564 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0df1ffe-2ac7-4444-9013-5b360b5189ec
	I0626 20:14:11.796148   30564 round_trippers.go:580]     Date: Mon, 26 Jun 2023 20:14:11 GMT
	I0626 20:14:11.796606   30564 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1195"},"items":[{"metadata":{"name":"multinode-050558","uid":"b85b442d-71e8-4c07-9b4b-851d3231c092","resourceVersion":"875","creationTimestamp":"2023-06-26T20:00:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-050558","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-050558","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T20_00_05_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15135 chars]
	I0626 20:14:11.797226   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:14:11.797247   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:14:11.797256   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:14:11.797260   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:14:11.797263   30564 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:14:11.797266   30564 node_conditions.go:123] node cpu capacity is 2
	I0626 20:14:11.797270   30564 node_conditions.go:105] duration metric: took 187.269665ms to run NodePressure ...
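The node_conditions readout above pulls ephemeral-storage and CPU capacity for each of the three nodes from a single NodeList call. A compact sketch of the same readout with client-go (helper name illustrative):

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacity reproduces the node_conditions.go readout above: ephemeral
// storage and CPU capacity for every node in the cluster.
func printCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
	return nil
}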
	I0626 20:14:11.797278   30564 start.go:228] waiting for startup goroutines ...
	I0626 20:14:11.797294   30564 start.go:242] writing updated cluster config ...
	I0626 20:14:11.797637   30564 ssh_runner.go:195] Run: rm -f paused
	I0626 20:14:11.844858   30564 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:14:11.847750   30564 out.go:177] * Done! kubectl is now configured to use "multinode-050558" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 20:10:01 UTC, ends at Mon 2023-06-26 20:14:13 UTC. --
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.199154212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b7063fa8-af98-4938-81de-fbfaf455b511 name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.199360498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce187062667daf3365b736668d8325f9d765cda8a9ab5681fb44ab31111ec4c5,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687810266555341923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3222d0020ff3c8e6e1e271a04ae4fd0684174d98212977e1ecf21d8851f327,PodSandboxId:7e5397fb15d11b526e3011d960eab2f86bd0023a47b88fdf797fbe69c4b0a596,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687810245586013290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcd3f54c33cf91c8eb403f380038d3fddfbf7a62ad57d0e59079dda33ed0752,PodSandboxId:ff8988cae9ba74a2dbc630064dd668fc334031833fcc0b32e509e1975e004fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687810242928635218,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bdae51de8c1ebbc0368c1b67190dfc799a9e87253c785489d4b17a8a3b9cd4,PodSandboxId:a13b4d6903f8c1706681c621815a763a102c0dffe5d3300eb6ded5a6f62f0c8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687810237675105596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffd43663f7d2290a7d0d4540d56d749557e86ad69c0e35fa09ebfdec81d10cd,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687810235263018482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb255942f3d96652447cd76c2f5fc1d987ef7b72f6d7aec6d46f8d462b6936e,PodSandboxId:6d864a2434f185714cec7b66211a16d051d8f4bdb2f434451db9c53e0e419a86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687810235245053679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849dc
57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccdcfd82ca2104bbb731605640f34a6f19e2470e42d52be00e351654a620984,PodSandboxId:82eab038c3f6385c1b58c48bf49ab31b48b914552cd8a1fc12aa85385da6d3f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687810229009946536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c6ef1c5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b6feb7bf718ded30010305e986cc3aa3ada8282f46cd69dd8fc676769701b7,PodSandboxId:42c251aab60844ee085b5fa20b36a2b252e949aa5b9b732b6768eb87147f1ff9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687810228916039398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824bbac52557b8f052e953982992ddd533213d19dc57a9fde95f9ac41b9ab080,PodSandboxId:593a3338796d915251aeb44af76c4c40962fd4598ef44d923121319358a63def,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687810228377358701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00103650315b9651902e41157c610977ede48bb97dd32ab3a9220655b9c60f1c,PodSandboxId:5dcfd89302d6443494c3f917dcc2c66da54278850148e21d04cf449947acfef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687810228300024664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: fc09cd2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b7063fa8-af98-4938-81de-fbfaf455b511 name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.402899484Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=26f8cac6-8a57-4b6f-b947-50ee0330d8db name=/runtime.v1.RuntimeService/Status
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.402995350Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=26f8cac6-8a57-4b6f-b947-50ee0330d8db name=/runtime.v1.RuntimeService/Status
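	The debug entries above are the CRI (Container Runtime Interface) gRPC calls that minikube and the kubelet poll CRI-O with: ListContainers with an empty filter returns every container (hence "No filters were applied, returning full container list"), and Status reports the RuntimeReady/NetworkReady conditions; the same polling then repeats against the older runtime.v1alpha2 service below. A minimal sketch of issuing the same two RPCs with the Kubernetes CRI client, assuming CRI-O's default socket path inside the minikube VM (/var/run/crio/crio.sock):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O serves the CRI over a local unix socket; no TLS is involved.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full container list, as in the log above.
		containers, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range containers.Containers {
			fmt.Printf("%.13s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
		}

		// Runtime/network readiness, as reported to the kubelet.
		status, err := client.Status(ctx, &runtimeapi.StatusRequest{})
		if err != nil {
			panic(err)
		}
		for _, cond := range status.Status.Conditions {
			fmt.Printf("%s=%v\n", cond.Type, cond.Status)
		}
	}

	The crictl ps and crictl info commands exercise these same two endpoints from the command line against the configured runtime socket.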
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.838948255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c9cd83c4-035e-4dc4-bf86-b97e6bfbf7d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.839043479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c9cd83c4-035e-4dc4-bf86-b97e6bfbf7d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.839282757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce187062667daf3365b736668d8325f9d765cda8a9ab5681fb44ab31111ec4c5,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687810266555341923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3222d0020ff3c8e6e1e271a04ae4fd0684174d98212977e1ecf21d8851f327,PodSandboxId:7e5397fb15d11b526e3011d960eab2f86bd0023a47b88fdf797fbe69c4b0a596,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687810245586013290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcd3f54c33cf91c8eb403f380038d3fddfbf7a62ad57d0e59079dda33ed0752,PodSandboxId:ff8988cae9ba74a2dbc630064dd668fc334031833fcc0b32e509e1975e004fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687810242928635218,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bdae51de8c1ebbc0368c1b67190dfc799a9e87253c785489d4b17a8a3b9cd4,PodSandboxId:a13b4d6903f8c1706681c621815a763a102c0dffe5d3300eb6ded5a6f62f0c8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687810237675105596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffd43663f7d2290a7d0d4540d56d749557e86ad69c0e35fa09ebfdec81d10cd,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687810235263018482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb255942f3d96652447cd76c2f5fc1d987ef7b72f6d7aec6d46f8d462b6936e,PodSandboxId:6d864a2434f185714cec7b66211a16d051d8f4bdb2f434451db9c53e0e419a86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687810235245053679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849dc
57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccdcfd82ca2104bbb731605640f34a6f19e2470e42d52be00e351654a620984,PodSandboxId:82eab038c3f6385c1b58c48bf49ab31b48b914552cd8a1fc12aa85385da6d3f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687810229009946536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c6ef1c5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b6feb7bf718ded30010305e986cc3aa3ada8282f46cd69dd8fc676769701b7,PodSandboxId:42c251aab60844ee085b5fa20b36a2b252e949aa5b9b732b6768eb87147f1ff9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687810228916039398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824bbac52557b8f052e953982992ddd533213d19dc57a9fde95f9ac41b9ab080,PodSandboxId:593a3338796d915251aeb44af76c4c40962fd4598ef44d923121319358a63def,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687810228377358701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00103650315b9651902e41157c610977ede48bb97dd32ab3a9220655b9c60f1c,PodSandboxId:5dcfd89302d6443494c3f917dcc2c66da54278850148e21d04cf449947acfef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687810228300024664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: fc09cd2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c9cd83c4-035e-4dc4-bf86-b97e6bfbf7d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.991968427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b21b56f8-deec-42ed-9210-b4d71659b94e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.992060082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b21b56f8-deec-42ed-9210-b4d71659b94e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:12 multinode-050558 crio[709]: time="2023-06-26 20:14:12.992284535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce187062667daf3365b736668d8325f9d765cda8a9ab5681fb44ab31111ec4c5,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687810266555341923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3222d0020ff3c8e6e1e271a04ae4fd0684174d98212977e1ecf21d8851f327,PodSandboxId:7e5397fb15d11b526e3011d960eab2f86bd0023a47b88fdf797fbe69c4b0a596,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687810245586013290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcd3f54c33cf91c8eb403f380038d3fddfbf7a62ad57d0e59079dda33ed0752,PodSandboxId:ff8988cae9ba74a2dbc630064dd668fc334031833fcc0b32e509e1975e004fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687810242928635218,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bdae51de8c1ebbc0368c1b67190dfc799a9e87253c785489d4b17a8a3b9cd4,PodSandboxId:a13b4d6903f8c1706681c621815a763a102c0dffe5d3300eb6ded5a6f62f0c8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687810237675105596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffd43663f7d2290a7d0d4540d56d749557e86ad69c0e35fa09ebfdec81d10cd,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687810235263018482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb255942f3d96652447cd76c2f5fc1d987ef7b72f6d7aec6d46f8d462b6936e,PodSandboxId:6d864a2434f185714cec7b66211a16d051d8f4bdb2f434451db9c53e0e419a86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687810235245053679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849dc
57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccdcfd82ca2104bbb731605640f34a6f19e2470e42d52be00e351654a620984,PodSandboxId:82eab038c3f6385c1b58c48bf49ab31b48b914552cd8a1fc12aa85385da6d3f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687810229009946536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c6ef1c5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b6feb7bf718ded30010305e986cc3aa3ada8282f46cd69dd8fc676769701b7,PodSandboxId:42c251aab60844ee085b5fa20b36a2b252e949aa5b9b732b6768eb87147f1ff9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687810228916039398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824bbac52557b8f052e953982992ddd533213d19dc57a9fde95f9ac41b9ab080,PodSandboxId:593a3338796d915251aeb44af76c4c40962fd4598ef44d923121319358a63def,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687810228377358701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00103650315b9651902e41157c610977ede48bb97dd32ab3a9220655b9c60f1c,PodSandboxId:5dcfd89302d6443494c3f917dcc2c66da54278850148e21d04cf449947acfef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687810228300024664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: fc09cd2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b21b56f8-deec-42ed-9210-b4d71659b94e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:13 multinode-050558 crio[709]: time="2023-06-26 20:14:13.023581416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1284184b-bfaf-4a6c-8f91-9d1a4ef0e463 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:13 multinode-050558 crio[709]: time="2023-06-26 20:14:13.023646142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1284184b-bfaf-4a6c-8f91-9d1a4ef0e463 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:13 multinode-050558 crio[709]: time="2023-06-26 20:14:13.023865296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce187062667daf3365b736668d8325f9d765cda8a9ab5681fb44ab31111ec4c5,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687810266555341923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3222d0020ff3c8e6e1e271a04ae4fd0684174d98212977e1ecf21d8851f327,PodSandboxId:7e5397fb15d11b526e3011d960eab2f86bd0023a47b88fdf797fbe69c4b0a596,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687810245586013290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcd3f54c33cf91c8eb403f380038d3fddfbf7a62ad57d0e59079dda33ed0752,PodSandboxId:ff8988cae9ba74a2dbc630064dd668fc334031833fcc0b32e509e1975e004fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687810242928635218,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bdae51de8c1ebbc0368c1b67190dfc799a9e87253c785489d4b17a8a3b9cd4,PodSandboxId:a13b4d6903f8c1706681c621815a763a102c0dffe5d3300eb6ded5a6f62f0c8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687810237675105596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffd43663f7d2290a7d0d4540d56d749557e86ad69c0e35fa09ebfdec81d10cd,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687810235263018482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb255942f3d96652447cd76c2f5fc1d987ef7b72f6d7aec6d46f8d462b6936e,PodSandboxId:6d864a2434f185714cec7b66211a16d051d8f4bdb2f434451db9c53e0e419a86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687810235245053679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849dc
57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccdcfd82ca2104bbb731605640f34a6f19e2470e42d52be00e351654a620984,PodSandboxId:82eab038c3f6385c1b58c48bf49ab31b48b914552cd8a1fc12aa85385da6d3f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687810229009946536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c6ef1c5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b6feb7bf718ded30010305e986cc3aa3ada8282f46cd69dd8fc676769701b7,PodSandboxId:42c251aab60844ee085b5fa20b36a2b252e949aa5b9b732b6768eb87147f1ff9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687810228916039398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824bbac52557b8f052e953982992ddd533213d19dc57a9fde95f9ac41b9ab080,PodSandboxId:593a3338796d915251aeb44af76c4c40962fd4598ef44d923121319358a63def,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687810228377358701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00103650315b9651902e41157c610977ede48bb97dd32ab3a9220655b9c60f1c,PodSandboxId:5dcfd89302d6443494c3f917dcc2c66da54278850148e21d04cf449947acfef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687810228300024664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: fc09cd2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1284184b-bfaf-4a6c-8f91-9d1a4ef0e463 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:13 multinode-050558 crio[709]: time="2023-06-26 20:14:13.066751859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0135bd0e-aff0-445e-a43d-dcd4618ec7f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:13 multinode-050558 crio[709]: time="2023-06-26 20:14:13.066830839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0135bd0e-aff0-445e-a43d-dcd4618ec7f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 20:14:13 multinode-050558 crio[709]: time="2023-06-26 20:14:13.067023441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ce187062667daf3365b736668d8325f9d765cda8a9ab5681fb44ab31111ec4c5,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687810266555341923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a3222d0020ff3c8e6e1e271a04ae4fd0684174d98212977e1ecf21d8851f327,PodSandboxId:7e5397fb15d11b526e3011d960eab2f86bd0023a47b88fdf797fbe69c4b0a596,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1687810245586013290,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-xw4h2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30f039c-5595-4af7-88c3-f7b1fbb71fef,},Annotations:map[string]string{io.kubernetes.container.hash: a24dd11b,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcd3f54c33cf91c8eb403f380038d3fddfbf7a62ad57d0e59079dda33ed0752,PodSandboxId:ff8988cae9ba74a2dbc630064dd668fc334031833fcc0b32e509e1975e004fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687810242928635218,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-5wffn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5,},Annotations:map[string]string{io.kubernetes.container.hash: cf776cea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bdae51de8c1ebbc0368c1b67190dfc799a9e87253c785489d4b17a8a3b9cd4,PodSandboxId:a13b4d6903f8c1706681c621815a763a102c0dffe5d3300eb6ded5a6f62f0c8f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1687810237675105596,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vjpzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 695a59a7-ddfd-4f5f-8084-86279daa17b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2e4dd373,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffd43663f7d2290a7d0d4540d56d749557e86ad69c0e35fa09ebfdec81d10cd,PodSandboxId:23debea431c931dce5f0e27ca4c60248519d28804ca6b13803503a012f011fea,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1687810235263018482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: fd433ce1-f37e-4168-930f-a93cd00821cb,},Annotations:map[string]string{io.kubernetes.container.hash: db7a08a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb255942f3d96652447cd76c2f5fc1d987ef7b72f6d7aec6d46f8d462b6936e,PodSandboxId:6d864a2434f185714cec7b66211a16d051d8f4bdb2f434451db9c53e0e419a86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687810235245053679,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-67x99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa817a-1b4a-41a1-9a56-5c65849dc
57e,},Annotations:map[string]string{io.kubernetes.container.hash: 504b8d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccdcfd82ca2104bbb731605640f34a6f19e2470e42d52be00e351654a620984,PodSandboxId:82eab038c3f6385c1b58c48bf49ab31b48b914552cd8a1fc12aa85385da6d3f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687810229009946536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51ca9066ce980968640db5826cdbb03,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: c6ef1c5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b6feb7bf718ded30010305e986cc3aa3ada8282f46cd69dd8fc676769701b7,PodSandboxId:42c251aab60844ee085b5fa20b36a2b252e949aa5b9b732b6768eb87147f1ff9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687810228916039398,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb51be42b8f4d7cafa13e10ab353dbbb,},Annotations:map[string]string{io.kubernetes.container.hash
: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:824bbac52557b8f052e953982992ddd533213d19dc57a9fde95f9ac41b9ab080,PodSandboxId:593a3338796d915251aeb44af76c4c40962fd4598ef44d923121319358a63def,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687810228377358701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8b8fdad19a87f17af5276f1f8a428a,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00103650315b9651902e41157c610977ede48bb97dd32ab3a9220655b9c60f1c,PodSandboxId:5dcfd89302d6443494c3f917dcc2c66da54278850148e21d04cf449947acfef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687810228300024664,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-050558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bf9120f8ca60da96af0ed761aeff36b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: fc09cd2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0135bd0e-aff0-445e-a43d-dcd4618ec7f6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	ce187062667da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   23debea431c93
	2a3222d0020ff       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   7e5397fb15d11
	9dcd3f54c33cf       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   ff8988cae9ba7
	d5bdae51de8c1       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      3 minutes ago       Running             kindnet-cni               1                   a13b4d6903f8c
	4ffd43663f7d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   23debea431c93
	ebb255942f3d9       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      3 minutes ago       Running             kube-proxy                1                   6d864a2434f18
	6ccdcfd82ca21       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      3 minutes ago       Running             etcd                      1                   82eab038c3f63
	51b6feb7bf718       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      3 minutes ago       Running             kube-scheduler            1                   42c251aab6084
	824bbac52557b       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      3 minutes ago       Running             kube-controller-manager   1                   593a3338796d9
	00103650315b9       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      3 minutes ago       Running             kube-apiserver            1                   5dcfd89302d64
	
	* 
	* ==> coredns [9dcd3f54c33cf91c8eb403f380038d3fddfbf7a62ad57d0e59079dda33ed0752] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47377 - 21728 "HINFO IN 1657183016159047674.4115412323602472575. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013798215s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-050558
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-050558
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=multinode-050558
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_00_05_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:00:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-050558
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 20:14:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 20:11:04 +0000   Mon, 26 Jun 2023 19:59:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 20:11:04 +0000   Mon, 26 Jun 2023 19:59:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 20:11:04 +0000   Mon, 26 Jun 2023 19:59:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 20:11:04 +0000   Mon, 26 Jun 2023 20:10:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    multinode-050558
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3ea7387ef9741e297b6451ef059cb66
	  System UUID:                f3ea7387-ef97-41e2-97b6-451ef059cb66
	  Boot ID:                    61ccdc10-afd7-4156-8562-1c173e8d6035
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-xw4h2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5d78c9869d-5wffn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-050558                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-vjpzs                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-050558             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-050558    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-67x99                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-050558             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m37s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-050558 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-050558 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-050558 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-050558 event: Registered Node multinode-050558 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-050558 status is now: NodeReady
	  Normal  Starting                 3m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m46s (x8 over 3m46s)  kubelet          Node multinode-050558 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s (x8 over 3m46s)  kubelet          Node multinode-050558 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s (x7 over 3m46s)  kubelet          Node multinode-050558 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m27s                  node-controller  Node multinode-050558 event: Registered Node multinode-050558 in Controller
	
	
	Name:               multinode-050558-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-050558-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:12:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-050558-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 20:14:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 20:12:27 +0000   Mon, 26 Jun 2023 20:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 20:12:27 +0000   Mon, 26 Jun 2023 20:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 20:12:27 +0000   Mon, 26 Jun 2023 20:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 20:12:27 +0000   Mon, 26 Jun 2023 20:12:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    multinode-050558-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6af71eae9f2c494a98e3c7c6d80044ef
	  System UUID:                6af71eae-9f2c-494a-98e3-c7c6d80044ef
	  Boot ID:                    e9f7895a-b9f7-4f9a-9ba7-7e3bdd64ea3c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-4lhxt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-kmcqm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-wwg6x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 103s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-050558-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-050558-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-050558-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-050558-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m49s                  kubelet     Node multinode-050558-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m15s (x2 over 3m15s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       108s                   kubelet     Node multinode-050558-m02 status is now: NodeNotSchedulable
	  Normal   Starting                 106s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)    kubelet     Node multinode-050558-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)    kubelet     Node multinode-050558-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)    kubelet     Node multinode-050558-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                   kubelet     Node multinode-050558-m02 status is now: NodeReady
	
	
	Name:               multinode-050558-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-050558-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:14:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-050558-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 20:14:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 20:14:08 +0000   Mon, 26 Jun 2023 20:14:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 20:14:08 +0000   Mon, 26 Jun 2023 20:14:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 20:14:08 +0000   Mon, 26 Jun 2023 20:14:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 20:14:08 +0000   Mon, 26 Jun 2023 20:14:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    multinode-050558-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5f117bdd3e14be78fbcba49cb4c68f7
	  System UUID:                a5f117bd-d3e1-4be7-8fbc-ba49cb4c68f7
	  Boot ID:                    363c3469-5a30-4282-a2e9-8412fd28218f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-b5z7t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-9tprm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-57pwt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-050558-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-050558-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-050558-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-050558-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-050558-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-050558-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-050558-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-050558-m03 status is now: NodeReady
	  Normal   NodeNotReady             66s                kubelet     Node multinode-050558-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        36s (x2 over 96s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-050558-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-050558-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-050558-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-050558-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Jun26 20:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070823] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.122762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jun26 20:10] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150862] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.452344] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000055] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.377107] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.118712] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.143894] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.109258] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.219968] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +16.794624] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[ +19.546414] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [6ccdcfd82ca2104bbb731605640f34a6f19e2470e42d52be00e351654a620984] <==
	* {"level":"info","ts":"2023-06-26T20:10:30.639Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-26T20:10:30.639Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-26T20:10:30.640Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 switched to configuration voters=(13286884612305677681)"}
	{"level":"info","ts":"2023-06-26T20:10:30.640Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","added-peer-id":"b8647f2870156d71","added-peer-peer-urls":["https://192.168.39.229:2380"]}
	{"level":"info","ts":"2023-06-26T20:10:30.640Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:10:30.640Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:10:30.642Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-26T20:10:30.642Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b8647f2870156d71","initial-advertise-peer-urls":["https://192.168.39.229:2380"],"listen-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-26T20:10:30.642Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-26T20:10:30.642Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2023-06-26T20:10:30.642Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2023-06-26T20:10:32.023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-26T20:10:32.023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-26T20:10:32.023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgPreVoteResp from b8647f2870156d71 at term 2"}
	{"level":"info","ts":"2023-06-26T20:10:32.023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became candidate at term 3"}
	{"level":"info","ts":"2023-06-26T20:10:32.023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2023-06-26T20:10:32.023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became leader at term 3"}
	{"level":"info","ts":"2023-06-26T20:10:32.023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2023-06-26T20:10:32.028Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b8647f2870156d71","local-member-attributes":"{Name:multinode-050558 ClientURLs:[https://192.168.39.229:2379]}","request-path":"/0/members/b8647f2870156d71/attributes","cluster-id":"2bfbf13ce68722b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-26T20:10:32.029Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T20:10:32.029Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T20:10:32.030Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.229:2379"}
	{"level":"info","ts":"2023-06-26T20:10:32.031Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-26T20:10:32.031Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-26T20:10:32.031Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  20:14:13 up 4 min,  0 users,  load average: 0.23, 0.21, 0.10
	Linux multinode-050558 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [d5bdae51de8c1ebbc0368c1b67190dfc799a9e87253c785489d4b17a8a3b9cd4] <==
	* I0626 20:13:39.292573       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:13:39.292753       1 main.go:227] handling current node
	I0626 20:13:39.292783       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I0626 20:13:39.292804       1 main.go:250] Node multinode-050558-m02 has CIDR [10.244.1.0/24] 
	I0626 20:13:39.292940       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0626 20:13:39.292968       1 main.go:250] Node multinode-050558-m03 has CIDR [10.244.3.0/24] 
	I0626 20:13:49.306195       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:13:49.306334       1 main.go:227] handling current node
	I0626 20:13:49.306376       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I0626 20:13:49.306396       1 main.go:250] Node multinode-050558-m02 has CIDR [10.244.1.0/24] 
	I0626 20:13:49.306603       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0626 20:13:49.306630       1 main.go:250] Node multinode-050558-m03 has CIDR [10.244.3.0/24] 
	I0626 20:13:59.316090       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:13:59.316357       1 main.go:227] handling current node
	I0626 20:13:59.316402       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I0626 20:13:59.316508       1 main.go:250] Node multinode-050558-m02 has CIDR [10.244.1.0/24] 
	I0626 20:13:59.316671       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0626 20:13:59.316705       1 main.go:250] Node multinode-050558-m03 has CIDR [10.244.3.0/24] 
	I0626 20:14:09.333651       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0626 20:14:09.333754       1 main.go:227] handling current node
	I0626 20:14:09.333784       1 main.go:223] Handling node with IPs: map[192.168.39.133:{}]
	I0626 20:14:09.333802       1 main.go:250] Node multinode-050558-m02 has CIDR [10.244.1.0/24] 
	I0626 20:14:09.334191       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0626 20:14:09.334233       1 main.go:250] Node multinode-050558-m03 has CIDR [10.244.2.0/24] 
	I0626 20:14:09.334299       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.231 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [00103650315b9651902e41157c610977ede48bb97dd32ab3a9220655b9c60f1c] <==
	* I0626 20:10:33.559382       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0626 20:10:33.607256       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0626 20:10:33.607338       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0626 20:10:33.707991       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0626 20:10:33.708754       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0626 20:10:33.710958       1 aggregator.go:152] initial CRD sync complete...
	I0626 20:10:33.711036       1 autoregister_controller.go:141] Starting autoregister controller
	I0626 20:10:33.711064       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0626 20:10:33.711091       1 cache.go:39] Caches are synced for autoregister controller
	I0626 20:10:33.754953       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0626 20:10:33.757651       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0626 20:10:33.755337       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0626 20:10:33.755688       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0626 20:10:33.767959       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0626 20:10:33.755711       1 shared_informer.go:318] Caches are synced for configmaps
	I0626 20:10:33.779896       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0626 20:10:34.274539       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0626 20:10:34.565376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0626 20:10:36.598503       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0626 20:10:36.736184       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0626 20:10:36.746935       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0626 20:10:36.826743       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0626 20:10:36.835974       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0626 20:10:46.485814       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0626 20:10:46.487239       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [824bbac52557b8f052e953982992ddd533213d19dc57a9fde95f9ac41b9ab080] <==
	* I0626 20:10:46.496586       1 shared_informer.go:318] Caches are synced for disruption
	I0626 20:10:46.502023       1 shared_informer.go:318] Caches are synced for attach detach
	I0626 20:10:46.528873       1 shared_informer.go:318] Caches are synced for resource quota
	I0626 20:10:46.566821       1 shared_informer.go:318] Caches are synced for daemon sets
	I0626 20:10:46.571475       1 shared_informer.go:318] Caches are synced for stateful set
	I0626 20:10:46.607899       1 shared_informer.go:318] Caches are synced for resource quota
	I0626 20:10:46.948540       1 shared_informer.go:318] Caches are synced for garbage collector
	I0626 20:10:46.975967       1 shared_informer.go:318] Caches are synced for garbage collector
	I0626 20:10:46.976059       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	W0626 20:11:24.654152       1 topologycache.go:232] Can't get CPU or zone information for multinode-050558-m03 node
	I0626 20:12:24.209337       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-b5z7t"
	W0626 20:12:27.223312       1 topologycache.go:232] Can't get CPU or zone information for multinode-050558-m03 node
	I0626 20:12:27.932526       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-050558-m02\" does not exist"
	W0626 20:12:27.933231       1 topologycache.go:232] Can't get CPU or zone information for multinode-050558-m03 node
	I0626 20:12:27.934246       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-z697w" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-z697w"
	I0626 20:12:27.957952       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-050558-m02" podCIDRs=[10.244.1.0/24]
	W0626 20:12:28.010264       1 topologycache.go:232] Can't get CPU or zone information for multinode-050558-m02 node
	W0626 20:13:07.257834       1 topologycache.go:232] Can't get CPU or zone information for multinode-050558-m02 node
	I0626 20:14:04.833294       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-4lhxt"
	W0626 20:14:07.845938       1 topologycache.go:232] Can't get CPU or zone information for multinode-050558-m02 node
	I0626 20:14:08.454930       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-050558-m03\" does not exist"
	W0626 20:14:08.456366       1 topologycache.go:232] Can't get CPU or zone information for multinode-050558-m02 node
	I0626 20:14:08.458551       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-b5z7t" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-b5z7t"
	I0626 20:14:08.479130       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-050558-m03" podCIDRs=[10.244.2.0/24]
	W0626 20:14:08.501469       1 topologycache.go:232] Can't get CPU or zone information for multinode-050558-m02 node
	
	* 
	* ==> kube-proxy [ebb255942f3d96652447cd76c2f5fc1d987ef7b72f6d7aec6d46f8d462b6936e] <==
	* I0626 20:10:35.912041       1 node.go:141] Successfully retrieved node IP: 192.168.39.229
	I0626 20:10:35.912220       1 server_others.go:110] "Detected node IP" address="192.168.39.229"
	I0626 20:10:35.912290       1 server_others.go:554] "Using iptables proxy"
	I0626 20:10:35.994902       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0626 20:10:35.994973       1 server_others.go:192] "Using iptables Proxier"
	I0626 20:10:35.995015       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 20:10:35.995873       1 server.go:658] "Version info" version="v1.27.3"
	I0626 20:10:35.995929       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 20:10:35.996609       1 config.go:188] "Starting service config controller"
	I0626 20:10:35.996664       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 20:10:35.996800       1 config.go:97] "Starting endpoint slice config controller"
	I0626 20:10:35.996937       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 20:10:35.997387       1 config.go:315] "Starting node config controller"
	I0626 20:10:35.999049       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 20:10:36.167684       1 shared_informer.go:318] Caches are synced for node config
	I0626 20:10:36.167938       1 shared_informer.go:318] Caches are synced for service config
	I0626 20:10:36.168079       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [51b6feb7bf718ded30010305e986cc3aa3ada8282f46cd69dd8fc676769701b7] <==
	* I0626 20:10:30.927685       1 serving.go:348] Generated self-signed cert in-memory
	W0626 20:10:33.639833       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0626 20:10:33.639917       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 20:10:33.639946       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0626 20:10:33.639971       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0626 20:10:33.715300       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0626 20:10:33.715360       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 20:10:33.717273       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0626 20:10:33.717957       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0626 20:10:33.718007       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0626 20:10:33.718032       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0626 20:10:33.818807       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 20:10:01 UTC, ends at Mon 2023-06-26 20:14:13 UTC. --
	Jun 26 20:10:36 multinode-050558 kubelet[913]: E0626 20:10:36.055628     913 projected.go:198] Error preparing data for projected volume kube-api-access-88pnc for pod default/busybox-67b7f59bb-xw4h2: object "default"/"kube-root-ca.crt" not registered
	Jun 26 20:10:36 multinode-050558 kubelet[913]: E0626 20:10:36.055676     913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e30f039c-5595-4af7-88c3-f7b1fbb71fef-kube-api-access-88pnc podName:e30f039c-5595-4af7-88c3-f7b1fbb71fef nodeName:}" failed. No retries permitted until 2023-06-26 20:10:38.055661527 +0000 UTC m=+11.008711477 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-88pnc" (UniqueName: "kubernetes.io/projected/e30f039c-5595-4af7-88c3-f7b1fbb71fef-kube-api-access-88pnc") pod "busybox-67b7f59bb-xw4h2" (UID: "e30f039c-5595-4af7-88c3-f7b1fbb71fef") : object "default"/"kube-root-ca.crt" not registered
	Jun 26 20:10:36 multinode-050558 kubelet[913]: E0626 20:10:36.352261     913 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-67b7f59bb-xw4h2" podUID=e30f039c-5595-4af7-88c3-f7b1fbb71fef
	Jun 26 20:10:36 multinode-050558 kubelet[913]: E0626 20:10:36.352508     913 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5d78c9869d-5wffn" podUID=c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5
	Jun 26 20:10:37 multinode-050558 kubelet[913]: E0626 20:10:37.971101     913 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 26 20:10:37 multinode-050558 kubelet[913]: E0626 20:10:37.971191     913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5-config-volume podName:c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5 nodeName:}" failed. No retries permitted until 2023-06-26 20:10:41.971165208 +0000 UTC m=+14.924215170 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5-config-volume") pod "coredns-5d78c9869d-5wffn" (UID: "c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5") : object "kube-system"/"coredns" not registered
	Jun 26 20:10:38 multinode-050558 kubelet[913]: E0626 20:10:38.072146     913 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jun 26 20:10:38 multinode-050558 kubelet[913]: E0626 20:10:38.072208     913 projected.go:198] Error preparing data for projected volume kube-api-access-88pnc for pod default/busybox-67b7f59bb-xw4h2: object "default"/"kube-root-ca.crt" not registered
	Jun 26 20:10:38 multinode-050558 kubelet[913]: E0626 20:10:38.072265     913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e30f039c-5595-4af7-88c3-f7b1fbb71fef-kube-api-access-88pnc podName:e30f039c-5595-4af7-88c3-f7b1fbb71fef nodeName:}" failed. No retries permitted until 2023-06-26 20:10:42.072246707 +0000 UTC m=+15.025296668 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-88pnc" (UniqueName: "kubernetes.io/projected/e30f039c-5595-4af7-88c3-f7b1fbb71fef-kube-api-access-88pnc") pod "busybox-67b7f59bb-xw4h2" (UID: "e30f039c-5595-4af7-88c3-f7b1fbb71fef") : object "default"/"kube-root-ca.crt" not registered
	Jun 26 20:10:38 multinode-050558 kubelet[913]: E0626 20:10:38.352324     913 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-67b7f59bb-xw4h2" podUID=e30f039c-5595-4af7-88c3-f7b1fbb71fef
	Jun 26 20:10:38 multinode-050558 kubelet[913]: E0626 20:10:38.352894     913 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5d78c9869d-5wffn" podUID=c89172e9-c2e0-4ded-8e6a-c4577ddb8dd5
	Jun 26 20:10:39 multinode-050558 kubelet[913]: I0626 20:10:39.431352     913 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jun 26 20:11:06 multinode-050558 kubelet[913]: I0626 20:11:06.531631     913 scope.go:115] "RemoveContainer" containerID="4ffd43663f7d2290a7d0d4540d56d749557e86ad69c0e35fa09ebfdec81d10cd"
	Jun 26 20:11:27 multinode-050558 kubelet[913]: E0626 20:11:27.370219     913 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 20:11:27 multinode-050558 kubelet[913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 20:11:27 multinode-050558 kubelet[913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 20:11:27 multinode-050558 kubelet[913]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 20:12:27 multinode-050558 kubelet[913]: E0626 20:12:27.374691     913 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 20:12:27 multinode-050558 kubelet[913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 20:12:27 multinode-050558 kubelet[913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 20:12:27 multinode-050558 kubelet[913]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 20:13:27 multinode-050558 kubelet[913]: E0626 20:13:27.372188     913 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 20:13:27 multinode-050558 kubelet[913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 20:13:27 multinode-050558 kubelet[913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 20:13:27 multinode-050558 kubelet[913]:  > table=nat chain=KUBE-KUBELET-CANARY
	

                                                
                                                
-- /stdout --
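The recurring kubelet error in the log above, failing to create the KUBE-KUBELET-CANARY chain in the ip6tables nat table, points at a guest kernel without the IPv6 NAT module available. A minimal check from the host, assuming the profile is still reachable over SSH (hypothetical session, not part of the test run):

	out/minikube-linux-amd64 -p multinode-050558 ssh -- sudo modprobe ip6table_nat
	out/minikube-linux-amd64 -p multinode-050558 ssh -- sudo ip6tables -t nat -L

If modprobe cannot find the module, the Buildroot kernel was built without ip6table_nat and the canary error is likely recurring noise rather than the cause of the restart failure.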
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-050558 -n multinode-050558
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-050558 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (684.18s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-050558 stop: exit status 82 (2m1.765254274s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-050558"  ...
	* Stopping node "multinode-050558"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
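Exit status 82 with GUEST_STOP_TIMEOUT means the kvm2 driver still saw the VM in state "Running" after the stop attempts. Beyond the log bundle suggested above, inspecting the libvirt domain directly can show whether the guest ever acknowledged the shutdown; a sketch, assuming host access to libvirt and that the kvm2 driver named the domain after the profile:

	out/minikube-linux-amd64 -p multinode-050558 logs --file=logs.txt
	virsh -c qemu:///system dominfo multinode-050558
	virsh -c qemu:///system list --all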
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-050558 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-050558 status: exit status 3 (18.741080213s)

                                                
                                                
-- stdout --
	multinode-050558
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-050558-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:16:36.753823   32818 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.229:22: connect: no route to host
	E0626 20:16:36.753870   32818 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.229:22: connect: no route to host

                                                
                                                
** /stderr **
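The repeated "connect: no route to host" errors on port 22 indicate the host lost network reachability to the control-plane VM entirely, not just SSH. A quick probe from the host (a sketch, using the 192.168.39.229 address from the errors above):

	nc -zv -w 5 192.168.39.229 22
	ping -c 3 192.168.39.229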
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-050558 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-050558 -n multinode-050558
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-050558 -n multinode-050558: exit status 3 (3.158615915s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:16:40.081694   32911 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.229:22: connect: no route to host
	E0626 20:16:40.081718   32911 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.229:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-050558" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.67s)

                                                
                                    
x
+
TestPreload (281.41s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-788359 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0626 20:26:33.751711   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:26:48.327562   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-788359 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m17.594613012s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-788359 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-788359 image pull gcr.io/k8s-minikube/busybox: (2.631628764s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-788359
E0626 20:28:30.705157   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:29:00.824081   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-788359: exit status 82 (2m1.622801812s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-788359"  ...
	* Stopping node "test-preload-788359"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-788359 failed: exit status 82
panic.go:522: *** TestPreload FAILED at 2023-06-26 20:29:17.10418763 +0000 UTC m=+3221.604215456
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-788359 -n test-preload-788359
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-788359 -n test-preload-788359: exit status 3 (18.659966827s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:29:35.761710   36363 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0626 20:29:35.761734   36363 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-788359" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-788359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-788359
--- FAIL: TestPreload (281.41s)
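Preload availability for a given Kubernetes version can also be checked directly against the release bucket; a sketch using the naming scheme of the v1.17.0 tarball referenced later in this report (hypothetical URL, assuming the scheme carries over to v1.24.4):

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 | head -n1

A 200 response means the tarball exists; a 404 would mean a restart has to fall back to per-image caching, as seen for v1.17.0 below.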

                                                
                                    
x
+
TestRunningBinaryUpgrade (157.55s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.6.2.3293028298.exe start -p running-upgrade-149180 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0626 20:36:48.326722   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.6.2.3293028298.exe start -p running-upgrade-149180 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m3.553313979s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-149180 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-149180 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (30.574994583s)

                                                
                                                
-- stdout --
	* [running-upgrade-149180] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-149180 in cluster running-upgrade-149180
	* Updating the running kvm2 "running-upgrade-149180" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 20:38:49.894571   44794 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:38:49.894673   44794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:38:49.894677   44794 out.go:309] Setting ErrFile to fd 2...
	I0626 20:38:49.894681   44794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:38:49.894784   44794 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:38:49.895359   44794 out.go:303] Setting JSON to false
	I0626 20:38:49.896502   44794 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4877,"bootTime":1687807053,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 20:38:49.896577   44794 start.go:137] virtualization: kvm guest
	I0626 20:38:49.899310   44794 out.go:177] * [running-upgrade-149180] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 20:38:49.900894   44794 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 20:38:49.902428   44794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 20:38:49.900920   44794 notify.go:220] Checking for updates...
	I0626 20:38:49.905432   44794 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:38:49.907094   44794 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:38:49.908709   44794 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 20:38:49.910388   44794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 20:38:49.912233   44794 config.go:182] Loaded profile config "running-upgrade-149180": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0626 20:38:49.912249   44794 start_flags.go:683] config upgrade: Driver=kvm2
	I0626 20:38:49.912257   44794 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0626 20:38:49.912372   44794 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/running-upgrade-149180/config.json ...
	I0626 20:38:49.912913   44794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:38:49.912952   44794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:38:49.928987   44794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
	I0626 20:38:49.929508   44794 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:38:49.930074   44794 main.go:141] libmachine: Using API Version  1
	I0626 20:38:49.930099   44794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:38:49.930436   44794 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:38:49.930600   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .DriverName
	I0626 20:38:49.932839   44794 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0626 20:38:49.934390   44794 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 20:38:49.934696   44794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:38:49.934740   44794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:38:49.950066   44794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39683
	I0626 20:38:49.950519   44794 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:38:49.951102   44794 main.go:141] libmachine: Using API Version  1
	I0626 20:38:49.951132   44794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:38:49.951480   44794 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:38:49.951692   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .DriverName
	I0626 20:38:49.993789   44794 out.go:177] * Using the kvm2 driver based on existing profile
	I0626 20:38:49.995301   44794 start.go:297] selected driver: kvm2
	I0626 20:38:49.995316   44794 start.go:954] validating driver "kvm2" against &{Name:running-upgrade-149180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.177 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:38:49.995414   44794 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 20:38:49.996055   44794 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:49.996146   44794 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 20:38:50.010840   44794 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 20:38:50.011224   44794 cni.go:84] Creating CNI manager for ""
	I0626 20:38:50.011248   44794 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0626 20:38:50.011256   44794 start_flags.go:319] config:
	{Name:running-upgrade-149180 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.177 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:38:50.011446   44794 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:50.013464   44794 out.go:177] * Starting control plane node running-upgrade-149180 in cluster running-upgrade-149180
	I0626 20:38:50.015025   44794 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0626 20:38:50.479497   44794 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0626 20:38:50.479646   44794 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/running-upgrade-149180/config.json ...
	I0626 20:38:50.479792   44794 cache.go:107] acquiring lock: {Name:mk8d1332847006819a7642bceadcaa87888dbfdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:50.479886   44794 cache.go:115] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0626 20:38:50.479896   44794 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 115.247µs
	I0626 20:38:50.479920   44794 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0626 20:38:50.479936   44794 cache.go:107] acquiring lock: {Name:mk3d63df4a91b5f9d18276b19e52ae460a6a3874 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:50.479957   44794 start.go:365] acquiring machines lock for running-upgrade-149180: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:38:50.479974   44794 cache.go:115] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0626 20:38:50.479981   44794 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 49.222µs
	I0626 20:38:50.479995   44794 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0626 20:38:50.480007   44794 cache.go:107] acquiring lock: {Name:mk6fd4a6598399dd501c4c3fc9ce705962aee7bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:50.480046   44794 cache.go:115] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0626 20:38:50.480062   44794 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 51.732µs
	I0626 20:38:50.480071   44794 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0626 20:38:50.480070   44794 cache.go:107] acquiring lock: {Name:mk5b502582092d23fe1bf8c1351df859355c2e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:50.480085   44794 cache.go:107] acquiring lock: {Name:mk50eb46530de295c5d8822f2de0f23681e0187a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:50.480108   44794 cache.go:107] acquiring lock: {Name:mkd6117b14b81be5748162c26807ea4beb565974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:50.480105   44794 cache.go:107] acquiring lock: {Name:mkbd9bf2ab822f4747a94c056403dc94c1288741 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:50.480131   44794 cache.go:115] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0626 20:38:50.480146   44794 cache.go:115] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0626 20:38:50.480158   44794 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 51.588µs
	I0626 20:38:50.480170   44794 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0626 20:38:50.480144   44794 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 57.469µs
	I0626 20:38:50.480164   44794 cache.go:107] acquiring lock: {Name:mk5aef23d3ca13038e1832621821d0f29a463f87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:50.480179   44794 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0626 20:38:50.480184   44794 cache.go:115] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0626 20:38:50.480199   44794 cache.go:115] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0626 20:38:50.480199   44794 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 111.581µs
	I0626 20:38:50.480209   44794 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 146.473µs
	I0626 20:38:50.480213   44794 cache.go:115] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0626 20:38:50.480215   44794 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0626 20:38:50.480217   44794 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0626 20:38:50.480226   44794 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 66.252µs
	I0626 20:38:50.480234   44794 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0626 20:38:50.480241   44794 cache.go:87] Successfully saved all images to host disk.
	I0626 20:39:15.798299   44794 start.go:369] acquired machines lock for "running-upgrade-149180" in 25.318285215s
	I0626 20:39:15.798350   44794 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:39:15.798364   44794 fix.go:54] fixHost starting: minikube
	I0626 20:39:15.798814   44794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:39:15.798950   44794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:39:15.816830   44794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0626 20:39:15.817204   44794 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:39:15.817711   44794 main.go:141] libmachine: Using API Version  1
	I0626 20:39:15.817735   44794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:39:15.818081   44794 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:39:15.818259   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .DriverName
	I0626 20:39:15.818421   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetState
	I0626 20:39:15.819853   44794 fix.go:102] recreateIfNeeded on running-upgrade-149180: state=Running err=<nil>
	W0626 20:39:15.819882   44794 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:39:15.821741   44794 out.go:177] * Updating the running kvm2 "running-upgrade-149180" VM ...
	I0626 20:39:15.823247   44794 machine.go:88] provisioning docker machine ...
	I0626 20:39:15.823266   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .DriverName
	I0626 20:39:15.823477   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetMachineName
	I0626 20:39:15.823605   44794 buildroot.go:166] provisioning hostname "running-upgrade-149180"
	I0626 20:39:15.823623   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetMachineName
	I0626 20:39:15.823741   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHHostname
	I0626 20:39:15.826107   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:15.826487   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:15.826515   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:15.826651   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHPort
	I0626 20:39:15.826833   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:15.826971   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:15.827114   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHUsername
	I0626 20:39:15.827257   44794 main.go:141] libmachine: Using SSH client type: native
	I0626 20:39:15.827699   44794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I0626 20:39:15.827716   44794 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-149180 && echo "running-upgrade-149180" | sudo tee /etc/hostname
	I0626 20:39:15.981836   44794 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-149180
	
	I0626 20:39:15.981869   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHHostname
	I0626 20:39:15.984728   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:15.985128   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:15.985162   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:15.985441   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHPort
	I0626 20:39:15.985624   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:15.985780   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:15.985977   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHUsername
	I0626 20:39:15.986162   44794 main.go:141] libmachine: Using SSH client type: native
	I0626 20:39:15.986783   44794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I0626 20:39:15.986811   44794 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-149180' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-149180/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-149180' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:39:16.122059   44794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:39:16.122077   44794 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:39:16.122093   44794 buildroot.go:174] setting up certificates
	I0626 20:39:16.122101   44794 provision.go:83] configureAuth start
	I0626 20:39:16.122109   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetMachineName
	I0626 20:39:16.122379   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetIP
	I0626 20:39:16.125254   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:16.125762   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:16.125798   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:16.125931   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHHostname
	I0626 20:39:16.128410   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:16.128760   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:16.128780   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:16.128918   44794 provision.go:138] copyHostCerts
	I0626 20:39:16.128982   44794 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:39:16.128993   44794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:39:16.129069   44794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:39:16.129172   44794 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:39:16.129180   44794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:39:16.129204   44794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:39:16.129343   44794 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:39:16.129357   44794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:39:16.129412   44794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:39:16.129506   44794 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-149180 san=[192.168.61.177 192.168.61.177 localhost 127.0.0.1 minikube running-upgrade-149180]
	I0626 20:39:16.228573   44794 provision.go:172] copyRemoteCerts
	I0626 20:39:16.228641   44794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:39:16.228663   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHHostname
	I0626 20:39:16.231635   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:16.231989   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:16.232033   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:16.232242   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHPort
	I0626 20:39:16.232439   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:16.232615   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHUsername
	I0626 20:39:16.232746   44794 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/running-upgrade-149180/id_rsa Username:docker}
	I0626 20:39:16.324268   44794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:39:16.342705   44794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0626 20:39:16.359790   44794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 20:39:16.377077   44794 provision.go:86] duration metric: configureAuth took 254.950129ms
	I0626 20:39:16.377103   44794 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:39:16.377279   44794 config.go:182] Loaded profile config "running-upgrade-149180": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0626 20:39:16.377395   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHHostname
	I0626 20:39:16.380513   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:16.380971   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:16.381004   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:16.381146   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHPort
	I0626 20:39:16.381364   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:16.381567   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:16.381753   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHUsername
	I0626 20:39:16.381957   44794 main.go:141] libmachine: Using SSH client type: native
	I0626 20:39:16.382591   44794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I0626 20:39:16.382623   44794 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:39:17.092158   44794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:39:17.092186   44794 machine.go:91] provisioned docker machine in 1.2689261s
	I0626 20:39:17.092199   44794 start.go:300] post-start starting for "running-upgrade-149180" (driver="kvm2")
	I0626 20:39:17.092211   44794 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:39:17.092246   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .DriverName
	I0626 20:39:17.092612   44794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:39:17.092649   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHHostname
	I0626 20:39:17.095871   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:17.096338   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:17.096364   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:17.096622   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHPort
	I0626 20:39:17.096797   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:17.096992   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHUsername
	I0626 20:39:17.097173   44794 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/running-upgrade-149180/id_rsa Username:docker}
	I0626 20:39:17.196591   44794 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:39:17.201165   44794 info.go:137] Remote host: Buildroot 2019.02.7
	I0626 20:39:17.201188   44794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:39:17.201252   44794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:39:17.201352   44794 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:39:17.201493   44794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:39:17.209272   44794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:39:17.228605   44794 start.go:303] post-start completed in 136.395047ms
	I0626 20:39:17.228627   44794 fix.go:56] fixHost completed within 1.430264593s
	I0626 20:39:17.228644   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHHostname
	I0626 20:39:18.367508   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:18.442262   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:18.442319   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:18.442522   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHPort
	I0626 20:39:18.442756   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:18.442951   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:18.443115   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHUsername
	I0626 20:39:18.443290   44794 main.go:141] libmachine: Using SSH client type: native
	I0626 20:39:18.443918   44794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.177 22 <nil> <nil>}
	I0626 20:39:18.443939   44794 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:39:18.573914   44794 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687811958.570828641
	
	I0626 20:39:18.573939   44794 fix.go:206] guest clock: 1687811958.570828641
	I0626 20:39:18.573948   44794 fix.go:219] Guest: 2023-06-26 20:39:18.570828641 +0000 UTC Remote: 2023-06-26 20:39:17.228630755 +0000 UTC m=+27.372437301 (delta=1.342197886s)
	I0626 20:39:18.573965   44794 fix.go:190] guest clock delta is within tolerance: 1.342197886s
	I0626 20:39:18.573970   44794 start.go:83] releasing machines lock for "running-upgrade-149180", held for 2.775649787s
	I0626 20:39:18.573992   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .DriverName
	I0626 20:39:18.574244   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetIP
	I0626 20:39:18.577363   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:18.577886   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:18.577921   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:18.578091   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .DriverName
	I0626 20:39:18.578667   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .DriverName
	I0626 20:39:18.578881   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .DriverName
	I0626 20:39:18.578954   44794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:39:18.579006   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHHostname
	I0626 20:39:18.579138   44794 ssh_runner.go:195] Run: cat /version.json
	I0626 20:39:18.579162   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHHostname
	I0626 20:39:18.581860   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:18.582234   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:18.582264   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:18.582291   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:18.582439   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHPort
	I0626 20:39:18.582638   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:18.582691   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:9b:20", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:37:16 +0000 UTC Type:0 Mac:52:54:00:08:9b:20 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:running-upgrade-149180 Clientid:01:52:54:00:08:9b:20}
	I0626 20:39:18.582715   44794 main.go:141] libmachine: (running-upgrade-149180) DBG | domain running-upgrade-149180 has defined IP address 192.168.61.177 and MAC address 52:54:00:08:9b:20 in network minikube-net
	I0626 20:39:18.582780   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHUsername
	I0626 20:39:18.582879   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHPort
	I0626 20:39:18.582894   44794 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/running-upgrade-149180/id_rsa Username:docker}
	I0626 20:39:18.583047   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHKeyPath
	I0626 20:39:18.583176   44794 main.go:141] libmachine: (running-upgrade-149180) Calling .GetSSHUsername
	I0626 20:39:18.583310   44794 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/running-upgrade-149180/id_rsa Username:docker}
	W0626 20:39:18.698443   44794 start.go:493] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0626 20:39:18.698521   44794 ssh_runner.go:195] Run: systemctl --version
	I0626 20:39:18.703675   44794 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:39:18.793464   44794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:39:18.799670   44794 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:39:18.799770   44794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:39:18.805045   44794 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0626 20:39:18.805070   44794 start.go:466] detecting cgroup driver to use...
	I0626 20:39:18.805136   44794 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:39:18.817852   44794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:39:18.831957   44794 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:39:18.832044   44794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:39:18.843845   44794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:39:18.855773   44794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0626 20:39:18.865170   44794 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0626 20:39:18.865248   44794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:39:19.017649   44794 docker.go:212] disabling docker service ...
	I0626 20:39:19.017730   44794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:39:20.059088   44794 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.041325323s)
	I0626 20:39:20.059171   44794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:39:20.075647   44794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:39:20.207584   44794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:39:20.378263   44794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:39:20.390261   44794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:39:20.405475   44794 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0626 20:39:20.405553   44794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:39:20.415868   44794 out.go:177] 
	W0626 20:39:20.417541   44794 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0626 20:39:20.417568   44794 out.go:239] * 
	W0626 20:39:20.418679   44794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0626 20:39:20.420846   44794 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-149180 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-06-26 20:39:20.440952489 +0000 UTC m=+3824.940980313
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-149180 -n running-upgrade-149180
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-149180 -n running-upgrade-149180: exit status 4 (251.909539ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:39:20.663700   45203 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-149180" does not appear in /home/jenkins/minikube-integration/16761-7242/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-149180" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-149180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-149180
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-149180: (1.299060358s)
--- FAIL: TestRunningBinaryUpgrade (157.55s)
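
The stderr above points at the root cause: the HEAD binary unconditionally rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the Buildroot 2019.02.7 guest image provisioned by minikube v1.6.2 predates the CRI-O drop-in config directory, so the sed target does not exist. A minimal sketch of a more defensive update, assuming the older image keeps its configuration at the classic /etc/crio/crio.conf path (the guard and fallback path are illustrative, not minikube's actual fix):

	# Pick whichever CRI-O config file actually exists on the guest before
	# rewriting pause_image, instead of assuming the drop-in layout.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf   # fallback for pre-drop-in images (assumption)
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"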

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (273.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.6.2.93940681.exe start -p stopped-upgrade-123924 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.6.2.93940681.exe start -p stopped-upgrade-123924 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m19.09824979s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.6.2.93940681.exe -p stopped-upgrade-123924 stop
E0626 20:38:30.704873   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.6.2.93940681.exe -p stopped-upgrade-123924 stop: (1m32.768627651s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-123924 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0626 20:38:43.873666   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-123924 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (41.909845776s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-123924] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-123924 in cluster stopped-upgrade-123924
	* Restarting existing kvm2 VM for "stopped-upgrade-123924" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 20:38:34.612604   44662 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:38:34.612723   44662 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:38:34.612731   44662 out.go:309] Setting ErrFile to fd 2...
	I0626 20:38:34.612735   44662 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:38:34.612837   44662 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:38:34.613346   44662 out.go:303] Setting JSON to false
	I0626 20:38:34.614294   44662 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4862,"bootTime":1687807053,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 20:38:34.614353   44662 start.go:137] virtualization: kvm guest
	I0626 20:38:34.616792   44662 out.go:177] * [stopped-upgrade-123924] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 20:38:34.618413   44662 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 20:38:34.618409   44662 notify.go:220] Checking for updates...
	I0626 20:38:34.620111   44662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 20:38:34.621622   44662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:38:34.623109   44662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:38:34.624538   44662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 20:38:34.626597   44662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 20:38:34.628516   44662 config.go:182] Loaded profile config "stopped-upgrade-123924": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0626 20:38:34.628532   44662 start_flags.go:683] config upgrade: Driver=kvm2
	I0626 20:38:34.628539   44662 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0626 20:38:34.628599   44662 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/stopped-upgrade-123924/config.json ...
	I0626 20:38:34.629066   44662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:38:34.629110   44662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:38:34.644065   44662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33183
	I0626 20:38:34.644479   44662 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:38:34.645218   44662 main.go:141] libmachine: Using API Version  1
	I0626 20:38:34.645254   44662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:38:34.645691   44662 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:38:34.645899   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	I0626 20:38:34.647971   44662 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0626 20:38:34.649586   44662 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 20:38:34.649921   44662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:38:34.649963   44662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:38:34.664589   44662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44905
	I0626 20:38:34.664965   44662 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:38:34.665478   44662 main.go:141] libmachine: Using API Version  1
	I0626 20:38:34.665508   44662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:38:34.665853   44662 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:38:34.666028   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	I0626 20:38:34.704002   44662 out.go:177] * Using the kvm2 driver based on existing profile
	I0626 20:38:34.705484   44662 start.go:297] selected driver: kvm2
	I0626 20:38:34.705496   44662 start.go:954] validating driver "kvm2" against &{Name:stopped-upgrade-123924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.17 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:38:34.705610   44662 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 20:38:34.706403   44662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:34.706477   44662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 20:38:34.720159   44662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 20:38:34.720439   44662 cni.go:84] Creating CNI manager for ""
	I0626 20:38:34.720454   44662 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0626 20:38:34.720462   44662 start_flags.go:319] config:
	{Name:stopped-upgrade-123924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.17 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:38:34.720612   44662 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:34.722563   44662 out.go:177] * Starting control plane node stopped-upgrade-123924 in cluster stopped-upgrade-123924
	I0626 20:38:34.723960   44662 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0626 20:38:35.188655   44662 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0626 20:38:35.188771   44662 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/stopped-upgrade-123924/config.json ...
	I0626 20:38:35.188921   44662 cache.go:107] acquiring lock: {Name:mk8d1332847006819a7642bceadcaa87888dbfdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:35.188961   44662 cache.go:107] acquiring lock: {Name:mkbd9bf2ab822f4747a94c056403dc94c1288741 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:35.189006   44662 start.go:365] acquiring machines lock for stopped-upgrade-123924: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:38:35.189029   44662 cache.go:115] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0626 20:38:35.189028   44662 cache.go:107] acquiring lock: {Name:mk50eb46530de295c5d8822f2de0f23681e0187a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:35.189080   44662 start.go:369] acquired machines lock for "stopped-upgrade-123924" in 50.459µs
	I0626 20:38:35.189047   44662 cache.go:107] acquiring lock: {Name:mk6fd4a6598399dd501c4c3fc9ce705962aee7bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:35.189103   44662 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:38:35.189118   44662 fix.go:54] fixHost starting: minikube
	I0626 20:38:35.189105   44662 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0626 20:38:35.189082   44662 cache.go:107] acquiring lock: {Name:mk5b502582092d23fe1bf8c1351df859355c2e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:35.189122   44662 cache.go:107] acquiring lock: {Name:mkd6117b14b81be5748162c26807ea4beb565974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:35.189046   44662 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 133.14µs
	I0626 20:38:35.189192   44662 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0626 20:38:35.188928   44662 cache.go:107] acquiring lock: {Name:mk5aef23d3ca13038e1832621821d0f29a463f87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:35.189214   44662 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0626 20:38:35.189231   44662 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0626 20:38:35.189268   44662 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0626 20:38:35.189247   44662 cache.go:107] acquiring lock: {Name:mk3d63df4a91b5f9d18276b19e52ae460a6a3874 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:38:35.189326   44662 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0626 20:38:35.189360   44662 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0626 20:38:35.189401   44662 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0626 20:38:35.189553   44662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:38:35.189587   44662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:38:35.190392   44662 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0626 20:38:35.190389   44662 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0626 20:38:35.190407   44662 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0626 20:38:35.190416   44662 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0626 20:38:35.190387   44662 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0626 20:38:35.190429   44662 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0626 20:38:35.190441   44662 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0626 20:38:35.206018   44662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35685
	I0626 20:38:35.206393   44662 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:38:35.206805   44662 main.go:141] libmachine: Using API Version  1
	I0626 20:38:35.206824   44662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:38:35.207114   44662 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:38:35.207295   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	I0626 20:38:35.207434   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetState
	I0626 20:38:35.209005   44662 fix.go:102] recreateIfNeeded on stopped-upgrade-123924: state=Stopped err=<nil>
	I0626 20:38:35.209042   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	W0626 20:38:35.209192   44662 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:38:35.211331   44662 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-123924" ...
	I0626 20:38:35.212776   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .Start
	I0626 20:38:35.212940   44662 main.go:141] libmachine: (stopped-upgrade-123924) Ensuring networks are active...
	I0626 20:38:35.213737   44662 main.go:141] libmachine: (stopped-upgrade-123924) Ensuring network default is active
	I0626 20:38:35.214126   44662 main.go:141] libmachine: (stopped-upgrade-123924) Ensuring network minikube-net is active
	I0626 20:38:35.214543   44662 main.go:141] libmachine: (stopped-upgrade-123924) Getting domain xml...
	I0626 20:38:35.215182   44662 main.go:141] libmachine: (stopped-upgrade-123924) Creating domain...
	I0626 20:38:35.364496   44662 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0626 20:38:35.379399   44662 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0626 20:38:35.396830   44662 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0626 20:38:35.456988   44662 cache.go:157] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0626 20:38:35.457014   44662 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 267.971964ms
	I0626 20:38:35.457029   44662 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0626 20:38:35.467444   44662 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0626 20:38:35.479196   44662 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0626 20:38:35.512505   44662 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0626 20:38:35.535190   44662 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0626 20:38:36.311621   44662 cache.go:157] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0626 20:38:36.311647   44662 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 1.122557089s
	I0626 20:38:36.311662   44662 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0626 20:38:36.733018   44662 cache.go:157] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0626 20:38:36.733043   44662 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.544082425s
	I0626 20:38:36.733059   44662 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0626 20:38:36.781956   44662 main.go:141] libmachine: (stopped-upgrade-123924) Waiting to get IP...
	I0626 20:38:36.783513   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:38:36.783699   44662 main.go:141] libmachine: (stopped-upgrade-123924) Found IP for machine: 192.168.61.17
	I0626 20:38:36.783724   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has current primary IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:38:36.783733   44662 main.go:141] libmachine: (stopped-upgrade-123924) Reserving static IP address...
	I0626 20:38:36.784465   44662 main.go:141] libmachine: (stopped-upgrade-123924) Reserved static IP address: 192.168.61.17
	I0626 20:38:36.784492   44662 main.go:141] libmachine: (stopped-upgrade-123924) Waiting for SSH to be available...
	I0626 20:38:36.784515   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "stopped-upgrade-123924", mac: "52:54:00:02:93:c8", ip: "192.168.61.17"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:38:36.784543   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-123924", mac: "52:54:00:02:93:c8", ip: "192.168.61.17"}
	I0626 20:38:36.784560   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Getting to WaitForSSH function...
	I0626 20:38:36.787705   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:38:36.788183   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:38:36.788221   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:38:36.788396   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Using SSH client type: external
	I0626 20:38:36.788418   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa (-rw-------)
	I0626 20:38:36.788452   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:38:36.788466   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | About to run SSH command:
	I0626 20:38:36.788478   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | exit 0
	I0626 20:38:37.058806   44662 cache.go:157] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0626 20:38:37.058834   44662 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.869822875s
	I0626 20:38:37.058849   44662 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0626 20:38:37.159506   44662 cache.go:157] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0626 20:38:37.159533   44662 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.970543963s
	I0626 20:38:37.159548   44662 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0626 20:38:37.450099   44662 cache.go:157] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0626 20:38:37.450131   44662 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.261210224s
	I0626 20:38:37.450146   44662 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0626 20:38:37.666416   44662 cache.go:157] /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0626 20:38:37.666447   44662 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.477430542s
	I0626 20:38:37.666462   44662 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0626 20:38:37.666484   44662 cache.go:87] Successfully saved all images to host disk.
	I0626 20:38:53.938191   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | SSH cmd err, output: exit status 255: 
	I0626 20:38:53.938220   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0626 20:38:53.938229   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | command : exit 0
	I0626 20:38:53.938239   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | err     : exit status 255
	I0626 20:38:53.938248   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | output  : 
	I0626 20:38:56.939939   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Getting to WaitForSSH function...
	I0626 20:38:56.943122   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:38:56.943701   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:38:56.943781   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:38:56.943894   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Using SSH client type: external
	I0626 20:38:56.943930   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa (-rw-------)
	I0626 20:38:56.943962   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:38:56.943980   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | About to run SSH command:
	I0626 20:38:56.943990   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | exit 0
	I0626 20:39:02.074892   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | SSH cmd err, output: exit status 255: 
	I0626 20:39:02.074924   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0626 20:39:02.074936   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | command : exit 0
	I0626 20:39:02.074945   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | err     : exit status 255
	I0626 20:39:02.074972   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | output  : 
	I0626 20:39:05.075642   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Getting to WaitForSSH function...
	I0626 20:39:05.078804   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.079269   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:05.079326   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.079418   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Using SSH client type: external
	I0626 20:39:05.079447   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa (-rw-------)
	I0626 20:39:05.079486   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:39:05.079509   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | About to run SSH command:
	I0626 20:39:05.079521   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | exit 0
	I0626 20:39:05.208941   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | SSH cmd err, output: <nil>: 
	I0626 20:39:05.209308   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetConfigRaw
	I0626 20:39:05.210063   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetIP
	I0626 20:39:05.212430   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.212874   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:05.212910   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.213129   44662 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/stopped-upgrade-123924/config.json ...
	I0626 20:39:05.213334   44662 machine.go:88] provisioning docker machine ...
	I0626 20:39:05.213354   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	I0626 20:39:05.213545   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetMachineName
	I0626 20:39:05.213793   44662 buildroot.go:166] provisioning hostname "stopped-upgrade-123924"
	I0626 20:39:05.213815   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetMachineName
	I0626 20:39:05.214017   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHHostname
	I0626 20:39:05.216134   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.216540   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:05.216564   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.216729   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHPort
	I0626 20:39:05.216878   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:05.217033   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:05.217150   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHUsername
	I0626 20:39:05.217272   44662 main.go:141] libmachine: Using SSH client type: native
	I0626 20:39:05.217874   44662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0626 20:39:05.217892   44662 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-123924 && echo "stopped-upgrade-123924" | sudo tee /etc/hostname
	I0626 20:39:05.336792   44662 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-123924
	
	I0626 20:39:05.336824   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHHostname
	I0626 20:39:05.339738   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.340190   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:05.340239   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.340432   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHPort
	I0626 20:39:05.340625   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:05.340807   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:05.340924   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHUsername
	I0626 20:39:05.341074   44662 main.go:141] libmachine: Using SSH client type: native
	I0626 20:39:05.341604   44662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0626 20:39:05.341628   44662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-123924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-123924/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-123924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:39:05.458194   44662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:39:05.458223   44662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:39:05.458246   44662 buildroot.go:174] setting up certificates
	I0626 20:39:05.458256   44662 provision.go:83] configureAuth start
	I0626 20:39:05.458268   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetMachineName
	I0626 20:39:05.458563   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetIP
	I0626 20:39:05.461155   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.461620   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:05.461650   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.461840   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHHostname
	I0626 20:39:05.464073   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.464410   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:05.464445   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.464567   44662 provision.go:138] copyHostCerts
	I0626 20:39:05.464631   44662 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:39:05.464641   44662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:39:05.464705   44662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:39:05.464804   44662 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:39:05.464812   44662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:39:05.464835   44662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:39:05.464939   44662 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:39:05.464948   44662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:39:05.464969   44662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:39:05.465025   44662 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-123924 san=[192.168.61.17 192.168.61.17 localhost 127.0.0.1 minikube stopped-upgrade-123924]
	I0626 20:39:05.669132   44662 provision.go:172] copyRemoteCerts
	I0626 20:39:05.669194   44662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:39:05.669218   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHHostname
	I0626 20:39:05.672450   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.672871   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:05.672897   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.673083   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHPort
	I0626 20:39:05.673282   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:05.673481   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHUsername
	I0626 20:39:05.673663   44662 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa Username:docker}
	I0626 20:39:05.756556   44662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0626 20:39:05.772139   44662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 20:39:05.786567   44662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:39:05.801006   44662 provision.go:86] duration metric: configureAuth took 342.730137ms
	I0626 20:39:05.801033   44662 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:39:05.801209   44662 config.go:182] Loaded profile config "stopped-upgrade-123924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0626 20:39:05.801299   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHHostname
	I0626 20:39:05.804181   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.804586   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:05.804617   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:05.804765   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHPort
	I0626 20:39:05.804979   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:05.805162   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:05.805330   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHUsername
	I0626 20:39:05.805515   44662 main.go:141] libmachine: Using SSH client type: native
	I0626 20:39:05.805943   44662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0626 20:39:05.805962   44662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:39:15.566648   44662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:39:15.566680   44662 machine.go:91] provisioned docker machine in 10.353331606s
	I0626 20:39:15.566701   44662 start.go:300] post-start starting for "stopped-upgrade-123924" (driver="kvm2")
	I0626 20:39:15.566713   44662 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:39:15.566735   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	I0626 20:39:15.567122   44662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:39:15.567157   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHHostname
	I0626 20:39:15.570014   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.570453   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:15.570488   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.570642   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHPort
	I0626 20:39:15.570849   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:15.570983   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHUsername
	I0626 20:39:15.571156   44662 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa Username:docker}
	I0626 20:39:15.657219   44662 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:39:15.661695   44662 info.go:137] Remote host: Buildroot 2019.02.7
	I0626 20:39:15.661717   44662 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:39:15.661779   44662 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:39:15.661845   44662 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:39:15.661965   44662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:39:15.668395   44662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:39:15.682952   44662 start.go:303] post-start completed in 116.235599ms
	I0626 20:39:15.682976   44662 fix.go:56] fixHost completed within 40.493861692s
	I0626 20:39:15.683010   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHHostname
	I0626 20:39:15.685661   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.686031   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:15.686064   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.686285   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHPort
	I0626 20:39:15.686506   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:15.686706   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:15.686864   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHUsername
	I0626 20:39:15.687082   44662 main.go:141] libmachine: Using SSH client type: native
	I0626 20:39:15.687670   44662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.17 22 <nil> <nil>}
	I0626 20:39:15.687697   44662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:39:15.798118   44662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687811955.746480547
	
	I0626 20:39:15.798141   44662 fix.go:206] guest clock: 1687811955.746480547
	I0626 20:39:15.798147   44662 fix.go:219] Guest: 2023-06-26 20:39:15.746480547 +0000 UTC Remote: 2023-06-26 20:39:15.682980667 +0000 UTC m=+41.103532611 (delta=63.49988ms)
	I0626 20:39:15.798196   44662 fix.go:190] guest clock delta is within tolerance: 63.49988ms
	I0626 20:39:15.798206   44662 start.go:83] releasing machines lock for "stopped-upgrade-123924", held for 40.609113867s
	I0626 20:39:15.798235   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	I0626 20:39:15.798551   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetIP
	I0626 20:39:15.801058   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.801666   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:15.801698   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.801889   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	I0626 20:39:15.802687   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	I0626 20:39:15.802902   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .DriverName
	I0626 20:39:15.802973   44662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:39:15.803035   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHHostname
	I0626 20:39:15.803157   44662 ssh_runner.go:195] Run: cat /version.json
	I0626 20:39:15.803205   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHHostname
	I0626 20:39:15.806004   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.806126   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.806376   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:15.806406   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.806543   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHPort
	I0626 20:39:15.806660   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:93:c8", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-06-26 21:35:22 +0000 UTC Type:0 Mac:52:54:00:02:93:c8 Iaid: IPaddr:192.168.61.17 Prefix:24 Hostname:stopped-upgrade-123924 Clientid:01:52:54:00:02:93:c8}
	I0626 20:39:15.806696   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:15.806743   44662 main.go:141] libmachine: (stopped-upgrade-123924) DBG | domain stopped-upgrade-123924 has defined IP address 192.168.61.17 and MAC address 52:54:00:02:93:c8 in network minikube-net
	I0626 20:39:15.806857   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHUsername
	I0626 20:39:15.806877   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHPort
	I0626 20:39:15.806994   44662 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa Username:docker}
	I0626 20:39:15.807412   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHKeyPath
	I0626 20:39:15.807545   44662 main.go:141] libmachine: (stopped-upgrade-123924) Calling .GetSSHUsername
	I0626 20:39:15.807681   44662 sshutil.go:53] new ssh client: &{IP:192.168.61.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa Username:docker}
	W0626 20:39:15.912782   44662 start.go:493] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0626 20:39:15.912854   44662 ssh_runner.go:195] Run: systemctl --version
	I0626 20:39:15.919344   44662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:39:16.100022   44662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:39:16.105368   44662 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:39:16.105456   44662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:39:16.110278   44662 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0626 20:39:16.110295   44662 start.go:466] detecting cgroup driver to use...
	I0626 20:39:16.110369   44662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:39:16.119807   44662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:39:16.130867   44662 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:39:16.130914   44662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:39:16.140511   44662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:39:16.150278   44662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0626 20:39:16.159994   44662 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0626 20:39:16.160125   44662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:39:16.254380   44662 docker.go:212] disabling docker service ...
	I0626 20:39:16.254469   44662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:39:16.266068   44662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:39:16.274040   44662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:39:16.360058   44662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:39:16.447375   44662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:39:16.457610   44662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:39:16.469898   44662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0626 20:39:16.469948   44662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:39:16.478526   44662 out.go:177] 
	W0626 20:39:16.480083   44662 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0626 20:39:16.480106   44662 out.go:239] * 
	W0626 20:39:16.480968   44662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0626 20:39:16.482523   44662 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-123924 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (273.78s)
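The root cause above is that the upgrade path rewrites pause_image in the drop-in file /etc/crio/crio.conf.d/02-crio.conf, but the v1.6.2-era guest (Buildroot 2019.02.7, per the log) ships only the monolithic /etc/crio/crio.conf, so the sed command exits 1. A minimal reproduction sketch against the guest from the log, assuming SSH access with the key shown above; the fallback edit of /etc/crio/crio.conf is illustrative, not minikube's actual behavior:

	# check which CRI-O config layout the old guest actually has
	ssh -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/stopped-upgrade-123924/id_rsa \
	  docker@192.168.61.17 'ls /etc/crio/crio.conf.d/ /etc/crio/crio.conf'
	# the failing rewrite, with an illustrative fallback to the monolithic file
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' \
	  /etc/crio/crio.conf.d/02-crio.conf 2>/dev/null \
	|| sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf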

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (75.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480285 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-480285 --driver=kvm2  --container-runtime=crio: signal: killed (1m15.136812426s)

                                                
                                                
-- stdout --
	* [NoKubernetes-480285] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-480285
	* Restarting existing kvm2 VM for "NoKubernetes-480285" ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-480285 --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-480285 -n NoKubernetes-480285
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-480285 -n NoKubernetes-480285: exit status 6 (234.148728ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:36:43.492118   43707 status.go:415] kubeconfig endpoint: extract IP: "NoKubernetes-480285" does not appear in /home/jenkins/minikube-integration/16761-7242/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-480285" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (75.37s)
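Here the start command was killed after 75s (likely by the suite's deadline) while the existing kvm2 VM was still restarting, and the post-mortem status then failed only because the kubeconfig no longer lists this profile, which is expected for a cluster started without Kubernetes. A hedged post-mortem sketch using only commands the log itself points at, with the profile name taken from the log:

	out/minikube-linux-amd64 status -p NoKubernetes-480285
	out/minikube-linux-amd64 update-context -p NoKubernetes-480285   # repoint the stale kubectl context
	out/minikube-linux-amd64 logs -p NoKubernetes-480285 --file=logs.txt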

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-490377 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-490377 --alsologtostderr -v=3: exit status 82 (2m2.289958025s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-490377"  ...
	* Stopping node "old-k8s-version-490377"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 20:39:06.770683   44961 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:39:06.770917   44961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:39:06.770954   44961 out.go:309] Setting ErrFile to fd 2...
	I0626 20:39:06.770972   44961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:39:06.771164   44961 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:39:06.771517   44961 out.go:303] Setting JSON to false
	I0626 20:39:06.771704   44961 mustload.go:65] Loading cluster: old-k8s-version-490377
	I0626 20:39:06.772190   44961 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:39:06.772356   44961 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/config.json ...
	I0626 20:39:06.772573   44961 mustload.go:65] Loading cluster: old-k8s-version-490377
	I0626 20:39:06.772757   44961 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:39:06.772818   44961 stop.go:39] StopHost: old-k8s-version-490377
	I0626 20:39:06.773398   44961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:39:06.773481   44961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:39:06.789329   44961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0626 20:39:06.789981   44961 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:39:06.790678   44961 main.go:141] libmachine: Using API Version  1
	I0626 20:39:06.790698   44961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:39:06.791129   44961 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:39:06.793516   44961 out.go:177] * Stopping node "old-k8s-version-490377"  ...
	I0626 20:39:06.794928   44961 main.go:141] libmachine: Stopping "old-k8s-version-490377"...
	I0626 20:39:06.794959   44961 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:39:06.797166   44961 main.go:141] libmachine: (old-k8s-version-490377) Calling .Stop
	I0626 20:39:06.800874   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 0/60
	I0626 20:39:07.802411   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 1/60
	I0626 20:39:08.804017   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 2/60
	I0626 20:39:09.805505   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 3/60
	I0626 20:39:10.806771   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 4/60
	I0626 20:39:11.809160   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 5/60
	I0626 20:39:12.810670   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 6/60
	I0626 20:39:13.812219   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 7/60
	I0626 20:39:14.814701   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 8/60
	I0626 20:39:15.816511   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 9/60
	I0626 20:39:16.818740   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 10/60
	I0626 20:39:18.364340   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 11/60
	I0626 20:39:19.366005   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 12/60
	I0626 20:39:20.367576   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 13/60
	I0626 20:39:21.521962   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 14/60
	I0626 20:39:22.523918   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 15/60
	I0626 20:39:23.525422   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 16/60
	I0626 20:39:24.527102   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 17/60
	I0626 20:39:25.528649   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 18/60
	I0626 20:39:26.530333   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 19/60
	I0626 20:39:27.532870   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 20/60
	I0626 20:39:28.534429   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 21/60
	I0626 20:39:29.535869   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 22/60
	I0626 20:39:30.538241   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 23/60
	I0626 20:39:31.539955   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 24/60
	I0626 20:39:32.541878   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 25/60
	I0626 20:39:33.543854   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 26/60
	I0626 20:39:34.545406   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 27/60
	I0626 20:39:35.546891   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 28/60
	I0626 20:39:36.548409   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 29/60
	I0626 20:39:37.770723   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 30/60
	I0626 20:39:38.772213   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 31/60
	I0626 20:39:39.773808   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 32/60
	I0626 20:39:40.775767   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 33/60
	I0626 20:39:41.777199   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 34/60
	I0626 20:39:42.779007   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 35/60
	I0626 20:39:43.780247   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 36/60
	I0626 20:39:44.781712   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 37/60
	I0626 20:39:45.783015   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 38/60
	I0626 20:39:46.784386   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 39/60
	I0626 20:39:47.786437   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 40/60
	I0626 20:39:48.788219   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 41/60
	I0626 20:39:49.789760   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 42/60
	I0626 20:39:50.792156   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 43/60
	I0626 20:39:51.793704   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 44/60
	I0626 20:39:52.795887   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 45/60
	I0626 20:39:53.797235   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 46/60
	I0626 20:39:54.798750   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 47/60
	I0626 20:39:55.800253   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 48/60
	I0626 20:39:56.802199   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 49/60
	I0626 20:39:57.804724   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 50/60
	I0626 20:39:58.806262   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 51/60
	I0626 20:39:59.808042   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 52/60
	I0626 20:40:00.810034   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 53/60
	I0626 20:40:01.812059   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 54/60
	I0626 20:40:02.813672   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 55/60
	I0626 20:40:03.815886   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 56/60
	I0626 20:40:04.817355   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 57/60
	I0626 20:40:05.818840   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 58/60
	I0626 20:40:06.820968   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 59/60
	I0626 20:40:07.821526   44961 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0626 20:40:07.821588   44961 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:40:07.821609   44961 retry.go:31] will retry after 1.038953611s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:40:08.860735   44961 stop.go:39] StopHost: old-k8s-version-490377
	I0626 20:40:08.861079   44961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:40:08.861120   44961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:40:08.875132   44961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I0626 20:40:08.875548   44961 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:40:08.876029   44961 main.go:141] libmachine: Using API Version  1
	I0626 20:40:08.876070   44961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:40:08.876561   44961 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:40:08.879167   44961 out.go:177] * Stopping node "old-k8s-version-490377"  ...
	I0626 20:40:08.880673   44961 main.go:141] libmachine: Stopping "old-k8s-version-490377"...
	I0626 20:40:08.880696   44961 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:40:08.882602   44961 main.go:141] libmachine: (old-k8s-version-490377) Calling .Stop
	I0626 20:40:08.886376   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 0/60
	I0626 20:40:09.888181   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 1/60
	I0626 20:40:10.889860   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 2/60
	I0626 20:40:11.892285   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 3/60
	I0626 20:40:12.893717   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 4/60
	I0626 20:40:13.895678   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 5/60
	I0626 20:40:14.897659   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 6/60
	I0626 20:40:15.900352   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 7/60
	I0626 20:40:16.902292   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 8/60
	I0626 20:40:17.904115   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 9/60
	I0626 20:40:18.906605   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 10/60
	I0626 20:40:19.909004   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 11/60
	I0626 20:40:20.910689   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 12/60
	I0626 20:40:21.912335   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 13/60
	I0626 20:40:22.914655   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 14/60
	I0626 20:40:23.916317   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 15/60
	I0626 20:40:24.918693   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 16/60
	I0626 20:40:25.920374   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 17/60
	I0626 20:40:26.921766   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 18/60
	I0626 20:40:27.923211   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 19/60
	I0626 20:40:28.924942   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 20/60
	I0626 20:40:29.926995   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 21/60
	I0626 20:40:30.928570   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 22/60
	I0626 20:40:31.930418   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 23/60
	I0626 20:40:32.931988   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 24/60
	I0626 20:40:33.933806   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 25/60
	I0626 20:40:34.935379   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 26/60
	I0626 20:40:35.937078   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 27/60
	I0626 20:40:36.938581   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 28/60
	I0626 20:40:37.940063   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 29/60
	I0626 20:40:38.941598   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 30/60
	I0626 20:40:39.944225   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 31/60
	I0626 20:40:40.945694   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 32/60
	I0626 20:40:41.948343   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 33/60
	I0626 20:40:42.950447   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 34/60
	I0626 20:40:43.952792   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 35/60
	I0626 20:40:44.954828   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 36/60
	I0626 20:40:45.956513   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 37/60
	I0626 20:40:46.958286   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 38/60
	I0626 20:40:47.960168   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 39/60
	I0626 20:40:48.961646   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 40/60
	I0626 20:40:49.963135   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 41/60
	I0626 20:40:50.964520   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 42/60
	I0626 20:40:51.966029   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 43/60
	I0626 20:40:52.967984   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 44/60
	I0626 20:40:53.969815   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 45/60
	I0626 20:40:54.971828   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 46/60
	I0626 20:40:55.973038   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 47/60
	I0626 20:40:56.974851   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 48/60
	I0626 20:40:57.976726   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 49/60
	I0626 20:40:58.978893   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 50/60
	I0626 20:40:59.980153   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 51/60
	I0626 20:41:00.981476   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 52/60
	I0626 20:41:01.982829   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 53/60
	I0626 20:41:02.984094   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 54/60
	I0626 20:41:03.986125   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 55/60
	I0626 20:41:04.987399   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 56/60
	I0626 20:41:05.988815   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 57/60
	I0626 20:41:06.990176   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 58/60
	I0626 20:41:07.991819   44961 main.go:141] libmachine: (old-k8s-version-490377) Waiting for machine to stop 59/60
	I0626 20:41:08.992644   44961 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0626 20:41:08.992692   44961 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:41:08.994724   44961 out.go:177] 
	W0626 20:41:08.996321   44961 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0626 20:41:08.996339   44961 out.go:239] * 
	W0626 20:41:08.998563   44961 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0626 20:41:08.999949   44961 out.go:177] 
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p old-k8s-version-490377 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490377 -n old-k8s-version-490377
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490377 -n old-k8s-version-490377: exit status 3 (18.439292256s)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0626 20:41:27.441645   46427 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host
	E0626 20:41:27.441670   46427 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-490377" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.73s)
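The trace above shows the shape of the failure: minikube asks the kvm2 driver to stop the guest, polls its state once per second for 60 iterations, retries the whole sequence once, and finally gives up with GUEST_STOP_TIMEOUT (exit status 82) because the VM never leaves the "Running" state. A minimal Go sketch of that poll-and-retry loop is below; vmState and stopVM are hypothetical stand-ins for the driver's .GetState/.Stop calls seen in the log, not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// Hypothetical stand-ins for the libmachine driver calls (.Stop / .GetState).
func stopVM()         {}                   // request a graceful shutdown
func vmState() string { return "Running" } // simulate a guest that ignores it

// stopWithTimeout mirrors the observed behavior: poll once per second,
// up to maxPolls times, and report failure if the VM is still running.
func stopWithTimeout(maxPolls int) error {
	stopVM()
	for i := 0; i < maxPolls; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxPolls)
		if vmState() != "Running" {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// First attempt, then one retry after a short backoff, as in the log.
	if err := stopWithTimeout(60); err != nil {
		fmt.Fprintf(os.Stderr, "will retry after 1.3s: %v\n", err)
		time.Sleep(1300 * time.Millisecond)
		if err := stopWithTimeout(60); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to GUEST_STOP_TIMEOUT:", err)
			os.Exit(82) // the exit status the test asserts against
		}
	}
}

Run as written, this takes roughly two minutes to exit with status 82, which matches the ~2m1s durations the failing stop commands report in this section.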
TestStartStop/group/no-preload/serial/Stop (140.29s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-934450 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-934450 --alsologtostderr -v=3: exit status 82 (2m1.62480464s)
-- stdout --
	* Stopping node "no-preload-934450"  ...
	* Stopping node "no-preload-934450"  ...
	
	
-- /stdout --
** stderr ** 
	I0626 20:40:57.018856   46388 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:40:57.019017   46388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:40:57.019029   46388 out.go:309] Setting ErrFile to fd 2...
	I0626 20:40:57.019034   46388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:40:57.019175   46388 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:40:57.019468   46388 out.go:303] Setting JSON to false
	I0626 20:40:57.019567   46388 mustload.go:65] Loading cluster: no-preload-934450
	I0626 20:40:57.019999   46388 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:40:57.020112   46388 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/config.json ...
	I0626 20:40:57.020319   46388 mustload.go:65] Loading cluster: no-preload-934450
	I0626 20:40:57.020470   46388 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:40:57.020508   46388 stop.go:39] StopHost: no-preload-934450
	I0626 20:40:57.021058   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:40:57.021099   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:40:57.035297   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I0626 20:40:57.035716   46388 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:40:57.036393   46388 main.go:141] libmachine: Using API Version  1
	I0626 20:40:57.036430   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:40:57.036848   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:40:57.039215   46388 out.go:177] * Stopping node "no-preload-934450"  ...
	I0626 20:40:57.040719   46388 main.go:141] libmachine: Stopping "no-preload-934450"...
	I0626 20:40:57.040743   46388 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:40:57.042678   46388 main.go:141] libmachine: (no-preload-934450) Calling .Stop
	I0626 20:40:57.046084   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 0/60
	I0626 20:40:58.048239   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 1/60
	I0626 20:40:59.050593   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 2/60
	I0626 20:41:00.052309   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 3/60
	I0626 20:41:01.054280   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 4/60
	I0626 20:41:02.056185   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 5/60
	I0626 20:41:03.058051   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 6/60
	I0626 20:41:04.059855   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 7/60
	I0626 20:41:05.061985   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 8/60
	I0626 20:41:06.063534   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 9/60
	I0626 20:41:07.065995   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 10/60
	I0626 20:41:08.068053   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 11/60
	I0626 20:41:09.069282   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 12/60
	I0626 20:41:10.070722   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 13/60
	I0626 20:41:11.072359   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 14/60
	I0626 20:41:12.074528   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 15/60
	I0626 20:41:13.075986   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 16/60
	I0626 20:41:14.077788   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 17/60
	I0626 20:41:15.079978   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 18/60
	I0626 20:41:16.081292   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 19/60
	I0626 20:41:17.083348   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 20/60
	I0626 20:41:18.085712   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 21/60
	I0626 20:41:19.087749   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 22/60
	I0626 20:41:20.089402   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 23/60
	I0626 20:41:21.090707   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 24/60
	I0626 20:41:22.092834   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 25/60
	I0626 20:41:23.094643   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 26/60
	I0626 20:41:24.096056   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 27/60
	I0626 20:41:25.097405   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 28/60
	I0626 20:41:26.099038   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 29/60
	I0626 20:41:27.101126   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 30/60
	I0626 20:41:28.102616   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 31/60
	I0626 20:41:29.104241   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 32/60
	I0626 20:41:30.105936   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 33/60
	I0626 20:41:31.108036   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 34/60
	I0626 20:41:32.110490   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 35/60
	I0626 20:41:33.111964   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 36/60
	I0626 20:41:34.113162   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 37/60
	I0626 20:41:35.114738   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 38/60
	I0626 20:41:36.116544   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 39/60
	I0626 20:41:37.118181   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 40/60
	I0626 20:41:38.120267   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 41/60
	I0626 20:41:39.121867   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 42/60
	I0626 20:41:40.124191   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 43/60
	I0626 20:41:41.125703   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 44/60
	I0626 20:41:42.127642   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 45/60
	I0626 20:41:43.128929   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 46/60
	I0626 20:41:44.130347   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 47/60
	I0626 20:41:45.131659   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 48/60
	I0626 20:41:46.133111   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 49/60
	I0626 20:41:47.135177   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 50/60
	I0626 20:41:48.136773   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 51/60
	I0626 20:41:49.138577   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 52/60
	I0626 20:41:50.140211   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 53/60
	I0626 20:41:51.141753   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 54/60
	I0626 20:41:52.143931   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 55/60
	I0626 20:41:53.145396   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 56/60
	I0626 20:41:54.146777   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 57/60
	I0626 20:41:55.148517   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 58/60
	I0626 20:41:56.149915   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 59/60
	I0626 20:41:57.151265   46388 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0626 20:41:57.151397   46388 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:41:57.151426   46388 retry.go:31] will retry after 1.329720639s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:41:58.481880   46388 stop.go:39] StopHost: no-preload-934450
	I0626 20:41:58.482258   46388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:41:58.482307   46388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:41:58.496574   46388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0626 20:41:58.496958   46388 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:41:58.497525   46388 main.go:141] libmachine: Using API Version  1
	I0626 20:41:58.497566   46388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:41:58.497911   46388 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:41:58.499876   46388 out.go:177] * Stopping node "no-preload-934450"  ...
	I0626 20:41:58.501492   46388 main.go:141] libmachine: Stopping "no-preload-934450"...
	I0626 20:41:58.501509   46388 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:41:58.503225   46388 main.go:141] libmachine: (no-preload-934450) Calling .Stop
	I0626 20:41:58.506606   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 0/60
	I0626 20:41:59.508936   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 1/60
	I0626 20:42:00.510270   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 2/60
	I0626 20:42:01.512295   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 3/60
	I0626 20:42:02.513654   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 4/60
	I0626 20:42:03.515354   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 5/60
	I0626 20:42:04.516759   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 6/60
	I0626 20:42:05.519063   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 7/60
	I0626 20:42:06.521236   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 8/60
	I0626 20:42:07.522644   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 9/60
	I0626 20:42:08.524367   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 10/60
	I0626 20:42:09.525809   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 11/60
	I0626 20:42:10.527083   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 12/60
	I0626 20:42:11.528586   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 13/60
	I0626 20:42:12.529932   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 14/60
	I0626 20:42:13.531778   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 15/60
	I0626 20:42:14.533150   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 16/60
	I0626 20:42:15.534331   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 17/60
	I0626 20:42:16.535755   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 18/60
	I0626 20:42:17.537111   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 19/60
	I0626 20:42:18.538833   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 20/60
	I0626 20:42:19.540205   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 21/60
	I0626 20:42:20.541387   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 22/60
	I0626 20:42:21.542813   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 23/60
	I0626 20:42:22.544126   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 24/60
	I0626 20:42:23.545838   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 25/60
	I0626 20:42:24.547076   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 26/60
	I0626 20:42:25.548245   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 27/60
	I0626 20:42:26.549608   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 28/60
	I0626 20:42:27.550915   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 29/60
	I0626 20:42:28.552420   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 30/60
	I0626 20:42:29.553795   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 31/60
	I0626 20:42:30.555097   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 32/60
	I0626 20:42:31.556554   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 33/60
	I0626 20:42:32.558065   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 34/60
	I0626 20:42:33.559679   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 35/60
	I0626 20:42:34.561291   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 36/60
	I0626 20:42:35.562560   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 37/60
	I0626 20:42:36.563950   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 38/60
	I0626 20:42:37.565444   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 39/60
	I0626 20:42:38.567315   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 40/60
	I0626 20:42:39.568677   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 41/60
	I0626 20:42:40.570302   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 42/60
	I0626 20:42:41.571694   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 43/60
	I0626 20:42:42.572941   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 44/60
	I0626 20:42:43.574729   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 45/60
	I0626 20:42:44.576170   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 46/60
	I0626 20:42:45.577809   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 47/60
	I0626 20:42:46.579487   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 48/60
	I0626 20:42:47.580844   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 49/60
	I0626 20:42:48.582678   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 50/60
	I0626 20:42:49.584053   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 51/60
	I0626 20:42:50.585511   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 52/60
	I0626 20:42:51.587000   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 53/60
	I0626 20:42:52.588284   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 54/60
	I0626 20:42:53.590067   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 55/60
	I0626 20:42:54.591309   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 56/60
	I0626 20:42:55.592647   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 57/60
	I0626 20:42:56.594006   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 58/60
	I0626 20:42:57.595322   46388 main.go:141] libmachine: (no-preload-934450) Waiting for machine to stop 59/60
	I0626 20:42:58.596242   46388 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0626 20:42:58.596283   46388 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:42:58.598325   46388 out.go:177] 
	W0626 20:42:58.600098   46388 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0626 20:42:58.600111   46388 out.go:239] * 
	W0626 20:42:58.602382   46388 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0626 20:42:58.603841   46388 out.go:177] 
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p no-preload-934450 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934450 -n no-preload-934450
E0626 20:43:13.752404   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934450 -n no-preload-934450: exit status 3 (18.660495974s)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0626 20:43:17.265662   47130 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.38:22: connect: no route to host
	E0626 20:43:17.265683   47130 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.38:22: connect: no route to host
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-934450" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.29s)
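Every post-mortem status probe in this group fails the same way: a TCP dial to the guest's SSH port returns "no route to host", so the host state is reported as "Error" and the status command exits with code 3 (which helpers_test.go treats as "may be ok"). A self-contained sketch of that reachability check, using only the standard library; the address is taken from the log above, and the exit-code mapping is inferred from the captured output rather than from minikube's source:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probeSSH mimics the status check in the log: if we cannot even dial
// the guest's SSH port, there is no point building an SSH session.
func probeSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		return fmt.Errorf("status error: NewSession: new client: dial %s: %w", addr, err)
	}
	conn.Close()
	return nil
}

func main() {
	// Address taken from the no-preload log above.
	if err := probeSSH("192.168.50.38:22"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		fmt.Println("Error") // matches the stdout the test captured
		os.Exit(3)           // "status error: exit status 3 (may be ok)"
	}
	fmt.Println("Running")
}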
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (9.31s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490377 -n old-k8s-version-490377
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490377 -n old-k8s-version-490377: exit status 3 (3.168179166s)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0626 20:41:30.609708   46584 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host
	E0626 20:41:30.609727   46584 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be "Stopped" but got "Error"
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-490377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-490377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 10 (3.081559647s)
-- stdout --
	* dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-490377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490377 -n old-k8s-version-490377
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490377 -n old-k8s-version-490377: exit status 3 (3.063096187s)
-- stdout --
	Error

** stderr ** 
	E0626 20:41:36.753707   46653 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host
	E0626 20:41:36.753728   46653 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host

helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-490377" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (9.31s)
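EnableAddonAfterStop expects enabling the dashboard addon to succeed after a stop, but because the preceding stop timed out, the VM's address is unreachable, and the enable path fails inside its callbacks with MK_ADDON_ENABLE (exit status 10). A hedged sketch of that failing shape; enableAddon and pushAddonImages are hypothetical names for illustration, not minikube's actual addon machinery:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// A callback that needs a live connection to the guest, as the error
// "run callbacks: ... dial tcp ...:22" in the log suggests.
func pushAddonImages(sshAddr string) error {
	conn, err := net.DialTimeout("tcp", sshAddr, 3*time.Second)
	if err != nil {
		return fmt.Errorf("run callbacks: running callbacks: [NewSession: new client: dial tcp %s: %w]", sshAddr, err)
	}
	defer conn.Close()
	return nil
}

func enableAddon(name, sshAddr string) error {
	fmt.Printf("  - enabling %s\n", name)
	// Updating the profile config needs no running host; only the
	// callback below does, which is where this run fell over.
	return pushAddonImages(sshAddr)
}

func main() {
	// Address taken from the old-k8s-version log above.
	if err := enableAddon("dashboard", "192.168.72.111:22"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_ADDON_ENABLE: enable failed:", err)
		os.Exit(10) // the exit status the test reports
	}
}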
TestStartStop/group/embed-certs/serial/Stop (140.04s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-299839 --alsologtostderr -v=3
E0626 20:41:48.326206   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-299839 --alsologtostderr -v=3: exit status 82 (2m1.433907904s)
-- stdout --
	* Stopping node "embed-certs-299839"  ...
	* Stopping node "embed-certs-299839"  ...
	
	
-- /stdout --
** stderr ** 
	I0626 20:41:39.757206   46779 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:41:39.757355   46779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:41:39.757367   46779 out.go:309] Setting ErrFile to fd 2...
	I0626 20:41:39.757390   46779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:41:39.757525   46779 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:41:39.757832   46779 out.go:303] Setting JSON to false
	I0626 20:41:39.757931   46779 mustload.go:65] Loading cluster: embed-certs-299839
	I0626 20:41:39.758279   46779 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:41:39.758375   46779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/config.json ...
	I0626 20:41:39.758573   46779 mustload.go:65] Loading cluster: embed-certs-299839
	I0626 20:41:39.758704   46779 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:41:39.758746   46779 stop.go:39] StopHost: embed-certs-299839
	I0626 20:41:39.759128   46779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:41:39.759189   46779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:41:39.774057   46779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35559
	I0626 20:41:39.774513   46779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:41:39.775109   46779 main.go:141] libmachine: Using API Version  1
	I0626 20:41:39.775133   46779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:41:39.775504   46779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:41:39.778773   46779 out.go:177] * Stopping node "embed-certs-299839"  ...
	I0626 20:41:39.780180   46779 main.go:141] libmachine: Stopping "embed-certs-299839"...
	I0626 20:41:39.780226   46779 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:41:39.781930   46779 main.go:141] libmachine: (embed-certs-299839) Calling .Stop
	I0626 20:41:39.785324   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 0/60
	I0626 20:41:40.786701   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 1/60
	I0626 20:41:41.788109   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 2/60
	I0626 20:41:42.789670   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 3/60
	I0626 20:41:43.791781   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 4/60
	I0626 20:41:44.793831   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 5/60
	I0626 20:41:45.795940   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 6/60
	I0626 20:41:46.797297   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 7/60
	I0626 20:41:47.798669   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 8/60
	I0626 20:41:48.800143   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 9/60
	I0626 20:41:49.801637   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 10/60
	I0626 20:41:50.803059   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 11/60
	I0626 20:41:51.804572   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 12/60
	I0626 20:41:52.806181   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 13/60
	I0626 20:41:53.807771   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 14/60
	I0626 20:41:54.809978   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 15/60
	I0626 20:41:55.811498   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 16/60
	I0626 20:41:56.812985   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 17/60
	I0626 20:41:57.814668   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 18/60
	I0626 20:41:58.816255   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 19/60
	I0626 20:41:59.818800   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 20/60
	I0626 20:42:00.820188   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 21/60
	I0626 20:42:01.822279   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 22/60
	I0626 20:42:02.823667   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 23/60
	I0626 20:42:03.825062   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 24/60
	I0626 20:42:04.827419   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 25/60
	I0626 20:42:05.828946   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 26/60
	I0626 20:42:06.830282   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 27/60
	I0626 20:42:07.831797   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 28/60
	I0626 20:42:08.833291   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 29/60
	I0626 20:42:09.835339   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 30/60
	I0626 20:42:10.836836   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 31/60
	I0626 20:42:11.838309   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 32/60
	I0626 20:42:12.839753   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 33/60
	I0626 20:42:13.841137   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 34/60
	I0626 20:42:14.842924   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 35/60
	I0626 20:42:15.844204   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 36/60
	I0626 20:42:16.845493   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 37/60
	I0626 20:42:17.847414   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 38/60
	I0626 20:42:18.848867   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 39/60
	I0626 20:42:19.850949   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 40/60
	I0626 20:42:20.852476   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 41/60
	I0626 20:42:21.853734   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 42/60
	I0626 20:42:22.855873   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 43/60
	I0626 20:42:23.857238   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 44/60
	I0626 20:42:24.858979   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 45/60
	I0626 20:42:25.860462   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 46/60
	I0626 20:42:26.861743   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 47/60
	I0626 20:42:27.864063   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 48/60
	I0626 20:42:28.865325   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 49/60
	I0626 20:42:29.867408   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 50/60
	I0626 20:42:30.868730   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 51/60
	I0626 20:42:31.870451   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 52/60
	I0626 20:42:32.871824   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 53/60
	I0626 20:42:33.873194   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 54/60
	I0626 20:42:34.874842   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 55/60
	I0626 20:42:35.876331   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 56/60
	I0626 20:42:36.877606   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 57/60
	I0626 20:42:37.878984   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 58/60
	I0626 20:42:38.880317   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 59/60
	I0626 20:42:39.881631   46779 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0626 20:42:39.881677   46779 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:42:39.881693   46779 retry.go:31] will retry after 1.147290689s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:42:41.029438   46779 stop.go:39] StopHost: embed-certs-299839
	I0626 20:42:41.029796   46779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:42:41.029844   46779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:42:41.044370   46779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I0626 20:42:41.044803   46779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:42:41.045241   46779 main.go:141] libmachine: Using API Version  1
	I0626 20:42:41.045266   46779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:42:41.045624   46779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:42:41.047518   46779 out.go:177] * Stopping node "embed-certs-299839"  ...
	I0626 20:42:41.048736   46779 main.go:141] libmachine: Stopping "embed-certs-299839"...
	I0626 20:42:41.048755   46779 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:42:41.050376   46779 main.go:141] libmachine: (embed-certs-299839) Calling .Stop
	I0626 20:42:41.053728   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 0/60
	I0626 20:42:42.055798   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 1/60
	I0626 20:42:43.057238   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 2/60
	I0626 20:42:44.058629   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 3/60
	I0626 20:42:45.060240   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 4/60
	I0626 20:42:46.062309   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 5/60
	I0626 20:42:47.063987   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 6/60
	I0626 20:42:48.065579   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 7/60
	I0626 20:42:49.066992   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 8/60
	I0626 20:42:50.068374   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 9/60
	I0626 20:42:51.070269   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 10/60
	I0626 20:42:52.071696   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 11/60
	I0626 20:42:53.073054   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 12/60
	I0626 20:42:54.074532   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 13/60
	I0626 20:42:55.075840   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 14/60
	I0626 20:42:56.077405   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 15/60
	I0626 20:42:57.078798   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 16/60
	I0626 20:42:58.080256   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 17/60
	I0626 20:42:59.081650   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 18/60
	I0626 20:43:00.083719   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 19/60
	I0626 20:43:01.085748   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 20/60
	I0626 20:43:02.087895   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 21/60
	I0626 20:43:03.089224   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 22/60
	I0626 20:43:04.090607   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 23/60
	I0626 20:43:05.091861   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 24/60
	I0626 20:43:06.093685   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 25/60
	I0626 20:43:07.095836   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 26/60
	I0626 20:43:08.097632   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 27/60
	I0626 20:43:09.098980   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 28/60
	I0626 20:43:10.100487   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 29/60
	I0626 20:43:11.102403   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 30/60
	I0626 20:43:12.103754   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 31/60
	I0626 20:43:13.105093   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 32/60
	I0626 20:43:14.106397   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 33/60
	I0626 20:43:15.107718   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 34/60
	I0626 20:43:16.109448   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 35/60
	I0626 20:43:17.110758   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 36/60
	I0626 20:43:18.112021   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 37/60
	I0626 20:43:19.113522   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 38/60
	I0626 20:43:20.114843   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 39/60
	I0626 20:43:21.116927   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 40/60
	I0626 20:43:22.118514   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 41/60
	I0626 20:43:23.119828   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 42/60
	I0626 20:43:24.121288   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 43/60
	I0626 20:43:25.122757   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 44/60
	I0626 20:43:26.124386   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 45/60
	I0626 20:43:27.125915   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 46/60
	I0626 20:43:28.127288   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 47/60
	I0626 20:43:29.128743   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 48/60
	I0626 20:43:30.130148   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 49/60
	I0626 20:43:31.132002   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 50/60
	I0626 20:43:32.133481   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 51/60
	I0626 20:43:33.135047   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 52/60
	I0626 20:43:34.136281   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 53/60
	I0626 20:43:35.137868   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 54/60
	I0626 20:43:36.139675   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 55/60
	I0626 20:43:37.140838   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 56/60
	I0626 20:43:38.142577   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 57/60
	I0626 20:43:39.143990   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 58/60
	I0626 20:43:40.145308   46779 main.go:141] libmachine: (embed-certs-299839) Waiting for machine to stop 59/60
	I0626 20:43:41.146245   46779 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0626 20:43:41.146285   46779 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:43:41.148110   46779 out.go:177] 
	W0626 20:43:41.149508   46779 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0626 20:43:41.149526   46779 out.go:239] * 
	W0626 20:43:41.151674   46779 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0626 20:43:41.153194   46779 out.go:177] 
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p embed-certs-299839 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299839 -n embed-certs-299839
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299839 -n embed-certs-299839: exit status 3 (18.607035782s)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0626 20:43:59.761663   47379 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0626 20:43:59.761683   47379 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-299839" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.04s)
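Note the retry pattern common to all of these stop failures: after the first 60-second poll window, retry.go logs "will retry after ~1.1-1.3s" and the entire stop sequence runs once more. A generic helper with that shape is sketched below; the jitter range is a guess from the two observed delays (1.329s and 1.147s), not the real backoff policy:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered ~1-1.5s
// between tries, echoing the "will retry after ..." lines in the log.
func retry(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i < attempts-1 {
			delay := time.Second + time.Duration(rand.Int63n(int64(500*time.Millisecond)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
	}
	return err
}

func main() {
	// Two attempts total, matching the two "Stopping node" passes above.
	err := retry(2, func() error {
		return errors.New(`Temporary Error: stop: unable to stop vm, current state "Running"`)
	})
	fmt.Println("final:", err)
}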
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.91s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-473235 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-473235 --alsologtostderr -v=3: exit status 82 (2m1.325005218s)
-- stdout --
	* Stopping node "default-k8s-diff-port-473235"  ...
	* Stopping node "default-k8s-diff-port-473235"  ...
	
	
-- /stdout --
** stderr ** 
	I0626 20:42:06.003806   46929 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:42:06.003954   46929 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:42:06.003965   46929 out.go:309] Setting ErrFile to fd 2...
	I0626 20:42:06.003972   46929 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:42:06.004103   46929 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:42:06.004382   46929 out.go:303] Setting JSON to false
	I0626 20:42:06.004496   46929 mustload.go:65] Loading cluster: default-k8s-diff-port-473235
	I0626 20:42:06.004830   46929 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:42:06.004927   46929 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:42:06.005113   46929 mustload.go:65] Loading cluster: default-k8s-diff-port-473235
	I0626 20:42:06.005244   46929 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:42:06.005280   46929 stop.go:39] StopHost: default-k8s-diff-port-473235
	I0626 20:42:06.005706   46929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:42:06.005772   46929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:42:06.019604   46929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36893
	I0626 20:42:06.020051   46929 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:42:06.020737   46929 main.go:141] libmachine: Using API Version  1
	I0626 20:42:06.020761   46929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:42:06.021107   46929 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:42:06.023344   46929 out.go:177] * Stopping node "default-k8s-diff-port-473235"  ...
	I0626 20:42:06.024934   46929 main.go:141] libmachine: Stopping "default-k8s-diff-port-473235"...
	I0626 20:42:06.024958   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:42:06.026299   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Stop
	I0626 20:42:06.029180   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 0/60
	I0626 20:42:07.030616   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 1/60
	I0626 20:42:08.032238   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 2/60
	I0626 20:42:09.033808   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 3/60
	I0626 20:42:10.035158   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 4/60
	I0626 20:42:11.037177   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 5/60
	I0626 20:42:12.038800   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 6/60
	I0626 20:42:13.040012   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 7/60
	I0626 20:42:14.041744   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 8/60
	I0626 20:42:15.043011   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 9/60
	I0626 20:42:16.045456   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 10/60
	I0626 20:42:17.046820   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 11/60
	I0626 20:42:18.048442   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 12/60
	I0626 20:42:19.049894   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 13/60
	I0626 20:42:20.051403   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 14/60
	I0626 20:42:21.053460   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 15/60
	I0626 20:42:22.054627   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 16/60
	I0626 20:42:23.055956   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 17/60
	I0626 20:42:24.057508   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 18/60
	I0626 20:42:25.059036   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 19/60
	I0626 20:42:26.061241   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 20/60
	I0626 20:42:27.062753   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 21/60
	I0626 20:42:28.064296   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 22/60
	I0626 20:42:29.065537   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 23/60
	I0626 20:42:30.067023   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 24/60
	I0626 20:42:31.069319   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 25/60
	I0626 20:42:32.070429   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 26/60
	I0626 20:42:33.071899   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 27/60
	I0626 20:42:34.073202   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 28/60
	I0626 20:42:35.074596   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 29/60
	I0626 20:42:36.076820   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 30/60
	I0626 20:42:37.078048   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 31/60
	I0626 20:42:38.079706   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 32/60
	I0626 20:42:39.081348   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 33/60
	I0626 20:42:40.082797   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 34/60
	I0626 20:42:41.084638   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 35/60
	I0626 20:42:42.085970   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 36/60
	I0626 20:42:43.087231   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 37/60
	I0626 20:42:44.088407   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 38/60
	I0626 20:42:45.089701   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 39/60
	I0626 20:42:46.091763   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 40/60
	I0626 20:42:47.092868   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 41/60
	I0626 20:42:48.094325   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 42/60
	I0626 20:42:49.095762   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 43/60
	I0626 20:42:50.097023   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 44/60
	I0626 20:42:51.098780   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 45/60
	I0626 20:42:52.099886   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 46/60
	I0626 20:42:53.100971   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 47/60
	I0626 20:42:54.102195   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 48/60
	I0626 20:42:55.103684   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 49/60
	I0626 20:42:56.105757   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 50/60
	I0626 20:42:57.107626   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 51/60
	I0626 20:42:58.108836   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 52/60
	I0626 20:42:59.109989   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 53/60
	I0626 20:43:00.111101   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 54/60
	I0626 20:43:01.112893   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 55/60
	I0626 20:43:02.114301   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 56/60
	I0626 20:43:03.115668   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 57/60
	I0626 20:43:04.116798   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 58/60
	I0626 20:43:05.117937   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 59/60
	I0626 20:43:06.119172   46929 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0626 20:43:06.119221   46929 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:43:06.119238   46929 retry.go:31] will retry after 1.044778088s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:43:07.164395   46929 stop.go:39] StopHost: default-k8s-diff-port-473235
	I0626 20:43:07.164781   46929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:43:07.164825   46929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:43:07.179481   46929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36273
	I0626 20:43:07.179934   46929 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:43:07.180476   46929 main.go:141] libmachine: Using API Version  1
	I0626 20:43:07.180547   46929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:43:07.180877   46929 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:43:07.182664   46929 out.go:177] * Stopping node "default-k8s-diff-port-473235"  ...
	I0626 20:43:07.183890   46929 main.go:141] libmachine: Stopping "default-k8s-diff-port-473235"...
	I0626 20:43:07.183904   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:43:07.185343   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Stop
	I0626 20:43:07.188592   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 0/60
	I0626 20:43:08.190024   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 1/60
	I0626 20:43:09.191473   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 2/60
	I0626 20:43:10.192815   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 3/60
	I0626 20:43:11.194375   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 4/60
	I0626 20:43:12.196361   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 5/60
	I0626 20:43:13.197661   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 6/60
	I0626 20:43:14.199809   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 7/60
	I0626 20:43:15.201303   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 8/60
	I0626 20:43:16.202590   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 9/60
	I0626 20:43:17.204624   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 10/60
	I0626 20:43:18.206176   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 11/60
	I0626 20:43:19.208129   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 12/60
	I0626 20:43:20.209867   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 13/60
	I0626 20:43:21.211206   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 14/60
	I0626 20:43:22.212961   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 15/60
	I0626 20:43:23.214520   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 16/60
	I0626 20:43:24.215837   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 17/60
	I0626 20:43:25.217278   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 18/60
	I0626 20:43:26.218535   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 19/60
	I0626 20:43:27.220237   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 20/60
	I0626 20:43:28.221889   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 21/60
	I0626 20:43:29.223449   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 22/60
	I0626 20:43:30.224850   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 23/60
	I0626 20:43:31.226148   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 24/60
	I0626 20:43:32.228054   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 25/60
	I0626 20:43:33.229787   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 26/60
	I0626 20:43:34.231153   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 27/60
	I0626 20:43:35.232642   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 28/60
	I0626 20:43:36.234242   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 29/60
	I0626 20:43:37.235536   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 30/60
	I0626 20:43:38.236888   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 31/60
	I0626 20:43:39.238435   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 32/60
	I0626 20:43:40.239795   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 33/60
	I0626 20:43:41.240782   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 34/60
	I0626 20:43:42.242506   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 35/60
	I0626 20:43:43.244071   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 36/60
	I0626 20:43:44.245235   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 37/60
	I0626 20:43:45.246861   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 38/60
	I0626 20:43:46.248723   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 39/60
	I0626 20:43:47.250463   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 40/60
	I0626 20:43:48.251879   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 41/60
	I0626 20:43:49.253321   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 42/60
	I0626 20:43:50.254656   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 43/60
	I0626 20:43:51.255938   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 44/60
	I0626 20:43:52.257740   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 45/60
	I0626 20:43:53.259599   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 46/60
	I0626 20:43:54.261133   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 47/60
	I0626 20:43:55.262760   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 48/60
	I0626 20:43:56.263917   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 49/60
	I0626 20:43:57.265688   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 50/60
	I0626 20:43:58.267050   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 51/60
	I0626 20:43:59.268370   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 52/60
	I0626 20:44:00.269726   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 53/60
	I0626 20:44:01.271992   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 54/60
	I0626 20:44:02.273796   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 55/60
	I0626 20:44:03.275899   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 56/60
	I0626 20:44:04.277350   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 57/60
	I0626 20:44:05.278829   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 58/60
	I0626 20:44:06.280105   46929 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for machine to stop 59/60
	I0626 20:44:07.281043   46929 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0626 20:44:07.281084   46929 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0626 20:44:07.283129   46929 out.go:177] 
	W0626 20:44:07.284627   46929 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0626 20:44:07.284644   46929 out.go:239] * 
	W0626 20:44:07.286926   46929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0626 20:44:07.288600   46929 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-473235 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235: exit status 3 (18.583487462s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:44:25.873720   47576 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.238:22: connect: no route to host
	E0626 20:44:25.873743   47576 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.238:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-473235" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.91s)
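For readers tracing this failure: the two 60-iteration runs of "Waiting for machine to stop N/60" above, separated by a single ~1s retry, imply a poll-once-per-second stop loop that gives up after 60 attempts and surfaces GUEST_STOP_TIMEOUT when the guest never leaves "Running". The following is a minimal Go sketch of that control flow reconstructed from the log, not minikube's actual implementation; waitForStop and the state callback are hypothetical stand-ins.

	package main

	import (
		"fmt"
		"time"
	)

	// waitForStop mirrors the "Waiting for machine to stop N/60" lines above:
	// after the stop request has been issued, it polls the VM state once per
	// second, up to 60 times, before giving up.
	func waitForStop(state func() string) error {
		for i := 0; i < 60; i++ {
			fmt.Printf("Waiting for machine to stop %d/60\n", i)
			if state() != "Running" {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", state())
	}

	func main() {
		// Hypothetical stand-in for the libvirt state query; in the failing
		// run above the guest reports "Running" for the whole two minutes.
		state := func() string { return "Running" }
		if err := waitForStop(state); err != nil {
			// The caller retries once after ~1s, then exits with
			// GUEST_STOP_TIMEOUT, matching the stderr above.
			fmt.Println("stop err:", err)
		}
	}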

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934450 -n no-preload-934450
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934450 -n no-preload-934450: exit status 3 (3.167734324s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:43:20.433651   47213 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.38:22: connect: no route to host
	E0626 20:43:20.433670   47213 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.38:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-934450 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-934450 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 10 (3.08040961s)

                                                
                                                
-- stdout --
	* dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [NewSession: new client: new client: dial tcp 192.168.50.38:22: connect: no route to host]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-934450 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934450 -n no-preload-934450
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934450 -n no-preload-934450: exit status 3 (3.063697588s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:43:26.577745   47279 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.38:22: connect: no route to host
	E0626 20:43:26.577785   47279 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.38:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-934450" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (9.31s)
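The subtest's two steps above, status --format={{.Host}} followed by addons enable dashboard, can be reproduced outside the harness with os/exec. This is a hedged sketch, assuming the relative binary path and profile name taken from the log and that it runs from the workspace root; exit status 3 from status means the host could not be queried (which is why the harness notes "may be ok"), while the addon step then fails outright with exit status 10.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one minikube command and reports its exit status, the way
	// the test harness interprets the non-zero exits shown above.
	func run(args ...string) {
		// Assumption: invoked from the workspace root, as in the CI job.
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		out, _ := cmd.CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		if err := cmd.Err; err != nil {
			fmt.Println("command setup error:", err)
		}
		if cmd.ProcessState != nil {
			fmt.Println("exit status:", cmd.ProcessState.ExitCode())
		}
	}

	func main() {
		const profile = "no-preload-934450" // profile name from the log above
		run("status", "--format={{.Host}}", "-p", profile, "-n", profile)
		run("addons", "enable", "dashboard", "-p", profile,
			"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	}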

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299839 -n embed-certs-299839
E0626 20:44:00.823793   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299839 -n embed-certs-299839: exit status 3 (3.16813101s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:44:02.929771   47471 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0626 20:44:02.929810   47471 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-299839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-299839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 10 (3.081250604s)

                                                
                                                
-- stdout --
	* dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-299839 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299839 -n embed-certs-299839
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299839 -n embed-certs-299839: exit status 3 (3.062218819s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:44:09.073758   47536 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0626 20:44:09.073784   47536 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-299839" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (9.31s)
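All of these EnableAddonAfterStop failures bottom out in the same transport error: dial tcp <guest-ip>:22: connect: no route to host. That dial can be checked directly with the standard library; a minimal sketch, using the embed-certs guest address from the log above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.39.51 is the embed-certs-299839 guest IP from the log;
		// after the failed stop the host no longer has a route to it.
		conn, err := net.DialTimeout("tcp", "192.168.39.51:22", 3*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err) // "connect: no route to host"
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}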

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235: exit status 3 (3.167557652s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:44:29.041666   47673 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.238:22: connect: no route to host
	E0626 20:44:29.041690   47673 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.238:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-473235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-473235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 10 (3.081134131s)

                                                
                                                
-- stdout --
	* dashboard is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [NewSession: new client: new client: dial tcp 192.168.61.238:22: connect: no route to host]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-473235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235: exit status 3 (3.062643668s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 20:44:35.185832   47749 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.238:22: connect: no route to host
	E0626 20:44:35.185865   47749 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.238:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-473235" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (9.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0626 20:53:30.705347   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:54:00.824766   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-06-26 21:01:53.482313862 +0000 UTC m=+5177.982341688
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
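The 9m0s wait for pods labeled k8s-app=kubernetes-dashboard is a plain label-selector poll against the apiserver. Below is a client-go sketch of the same check, under stated assumptions: the kubeconfig path is the one this job uses elsewhere in the log, and the five-second poll interval is illustrative rather than the harness's actual cadence.

	package main

	import (
		"context"
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig path taken from this job's environment.
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/16761-7242/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same deadline the test uses: 9 minutes for the dashboard pod.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == v1.PodRunning {
						fmt.Println("dashboard pod running:", p.Name)
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed waiting for pod:", ctx.Err()) // context deadline exceeded
				return
			case <-time.After(5 * time.Second):
			}
		}
	}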
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-473235 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-473235 logs -n 25: (1.591084543s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-490377        | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-123924                              | stopped-upgrade-123924       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603225 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | disable-driver-mounts-603225                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934450             | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC | 26 Jun 23 20:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490377             | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-299839            | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-473235  | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC | 26 Jun 23 20:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC |                     |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934450                  | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-299839                 | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-473235       | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:52 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 20:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 20:44:35.222921   47779 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:44:35.223059   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223070   47779 out.go:309] Setting ErrFile to fd 2...
	I0626 20:44:35.223074   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223199   47779 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:44:35.223797   47779 out.go:303] Setting JSON to false
	I0626 20:44:35.224674   47779 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5222,"bootTime":1687807053,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 20:44:35.224734   47779 start.go:137] virtualization: kvm guest
	I0626 20:44:35.226901   47779 out.go:177] * [default-k8s-diff-port-473235] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 20:44:35.228842   47779 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 20:44:35.228804   47779 notify.go:220] Checking for updates...
	I0626 20:44:35.230224   47779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 20:44:35.231788   47779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:44:35.233239   47779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:44:35.234554   47779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 20:44:35.236823   47779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 20:44:35.238432   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:44:35.238825   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.238878   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.253669   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0626 20:44:35.254014   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.254589   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.254610   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.254907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.255090   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.255322   47779 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 20:44:35.255597   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.255627   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.269620   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39451
	I0626 20:44:35.270027   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.270571   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.270599   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.270857   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.271037   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.302607   47779 out.go:177] * Using the kvm2 driver based on existing profile
	I0626 20:44:35.303877   47779 start.go:297] selected driver: kvm2
	I0626 20:44:35.303889   47779 start.go:954] validating driver "kvm2" against &{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.303997   47779 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 20:44:35.304600   47779 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.304681   47779 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 20:44:35.319036   47779 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 20:44:35.319459   47779 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 20:44:35.319499   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:44:35.319516   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:44:35.319532   47779 start_flags.go:319] config:
	{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.319725   47779 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.321690   47779 out.go:177] * Starting control plane node default-k8s-diff-port-473235 in cluster default-k8s-diff-port-473235
	I0626 20:44:33.713644   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:35.323076   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:44:35.323119   47779 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 20:44:35.323145   47779 cache.go:57] Caching tarball of preloaded images
	I0626 20:44:35.323245   47779 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 20:44:35.323260   47779 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 20:44:35.323385   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:44:35.323607   47779 start.go:365] acquiring machines lock for default-k8s-diff-port-473235: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
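	The machines lock serializes host starts across all profiles in this run; every profile uses the same lock name (mk642eecf0515daf16e2fdae275a6737c9b4f437), which is why the "acquired machines lock" lines further down report multi-minute waits. A minimal Go sketch of the acquire-with-retry pattern that the {Delay:500ms Timeout:13m0s} fields describe, with tryLock as an assumed callback rather than minikube's actual API:

		package main

		import (
			"fmt"
			"time"
		)

		// acquireWithRetry polls tryLock every delay until it succeeds or the
		// timeout elapses, mirroring the Delay/Timeout fields logged above.
		func acquireWithRetry(tryLock func() bool, delay, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for {
				if tryLock() {
					return nil
				}
				if time.Now().After(deadline) {
					return fmt.Errorf("timed out after %v waiting for machines lock", timeout)
				}
				time.Sleep(delay)
			}
		}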
	I0626 20:44:39.793629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:42.865602   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:48.945651   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:52.017646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:58.097650   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:01.169629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:07.249647   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:10.321634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:16.401660   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:19.473641   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:25.553634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:28.625721   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:34.705617   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:37.777753   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:43.857659   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:46.929661   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:53.009637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:56.081646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:02.161637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:05.233633   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:11.313640   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:14.317303   47309 start.go:369] acquired machines lock for "no-preload-934450" in 2m47.59820508s
	I0626 20:46:14.317355   47309 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:14.317388   47309 fix.go:54] fixHost starting: 
	I0626 20:46:14.317703   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:14.317733   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:14.331991   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0626 20:46:14.332358   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:14.332862   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:46:14.332888   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:14.333180   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:14.333368   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:14.333556   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:46:14.334930   47309 fix.go:102] recreateIfNeeded on no-preload-934450: state=Stopped err=<nil>
	I0626 20:46:14.334954   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	W0626 20:46:14.335122   47309 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:14.336692   47309 out.go:177] * Restarting existing kvm2 VM for "no-preload-934450" ...
	I0626 20:46:14.338056   47309 main.go:141] libmachine: (no-preload-934450) Calling .Start
	I0626 20:46:14.338201   47309 main.go:141] libmachine: (no-preload-934450) Ensuring networks are active...
	I0626 20:46:14.339255   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network default is active
	I0626 20:46:14.339575   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network mk-no-preload-934450 is active
	I0626 20:46:14.339980   47309 main.go:141] libmachine: (no-preload-934450) Getting domain xml...
	I0626 20:46:14.340638   47309 main.go:141] libmachine: (no-preload-934450) Creating domain...
	I0626 20:46:15.550725   47309 main.go:141] libmachine: (no-preload-934450) Waiting to get IP...
	I0626 20:46:15.551641   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.552053   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.552125   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.552057   48070 retry.go:31] will retry after 285.629833ms: waiting for machine to come up
	I0626 20:46:15.839584   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.839950   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.839976   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.839920   48070 retry.go:31] will retry after 318.234269ms: waiting for machine to come up
	I0626 20:46:16.159361   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.159793   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.159823   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.159752   48070 retry.go:31] will retry after 486.280811ms: waiting for machine to come up
	I0626 20:46:14.315357   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:14.315401   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:46:14.317194   46683 machine.go:91] provisioned docker machine in 4m37.381545898s
	I0626 20:46:14.317230   46683 fix.go:56] fixHost completed within 4m37.403983922s
	I0626 20:46:14.317236   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 4m37.404002624s
	W0626 20:46:14.317252   46683 start.go:672] error starting host: provision: host is not running
	W0626 20:46:14.317326   46683 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0626 20:46:14.317333   46683 start.go:687] Will try again in 5 seconds ...
	I0626 20:46:16.647364   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.647777   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.647803   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.647721   48070 retry.go:31] will retry after 396.658606ms: waiting for machine to come up
	I0626 20:46:17.046604   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.047131   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.047156   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.047033   48070 retry.go:31] will retry after 741.382401ms: waiting for machine to come up
	I0626 20:46:17.789616   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.790035   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.790068   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.790014   48070 retry.go:31] will retry after 636.769895ms: waiting for machine to come up
	I0626 20:46:18.427899   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:18.428300   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:18.428326   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:18.428272   48070 retry.go:31] will retry after 869.736092ms: waiting for machine to come up
	I0626 20:46:19.299429   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:19.299742   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:19.299765   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:19.299717   48070 retry.go:31] will retry after 1.261709663s: waiting for machine to come up
	I0626 20:46:20.563421   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:20.563778   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:20.563807   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:20.563751   48070 retry.go:31] will retry after 1.280588584s: waiting for machine to come up
	I0626 20:46:19.318965   46683 start.go:365] acquiring machines lock for old-k8s-version-490377: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:46:21.846094   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:21.846530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:21.846557   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:21.846475   48070 retry.go:31] will retry after 1.542478163s: waiting for machine to come up
	I0626 20:46:23.391088   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:23.391530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:23.391559   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:23.391474   48070 retry.go:31] will retry after 2.115450652s: waiting for machine to come up
	I0626 20:46:25.508447   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:25.508882   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:25.508915   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:25.508826   48070 retry.go:31] will retry after 3.403199971s: waiting for machine to come up
	I0626 20:46:28.916347   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:28.916756   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:28.916782   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:28.916706   48070 retry.go:31] will retry after 3.011345508s: waiting for machine to come up
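	The retry.go lines above show the wait for the restarted VM's DHCP lease: each attempt sleeps a growing, jittered interval (285ms, 318ms, ... 3.4s) before polling the domain again. A sketch of that shape in Go, with lookup as an assumed callback and the growth factor an approximation of what the log shows:

		package main

		import (
			"errors"
			"math/rand"
			"time"
		)

		// waitForIP polls lookup until the domain reports an IP, sleeping a
		// growing, jittered interval between attempts as in the log above.
		func waitForIP(lookup func() (string, bool), maxWait time.Duration) (string, error) {
			deadline := time.Now().Add(maxWait)
			interval := 300 * time.Millisecond // assumed starting interval
			for time.Now().Before(deadline) {
				if ip, ok := lookup(); ok {
					return ip, nil
				}
				jitter := time.Duration(rand.Int63n(int64(interval / 2)))
				time.Sleep(interval + jitter)
				interval = interval * 3 / 2 // roughly the growth seen above
			}
			return "", errors.New("timed out waiting for machine to come up")
		}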
	I0626 20:46:33.094365   47605 start.go:369] acquired machines lock for "embed-certs-299839" in 2m23.878841424s
	I0626 20:46:33.094419   47605 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:33.094440   47605 fix.go:54] fixHost starting: 
	I0626 20:46:33.094827   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:33.094856   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:33.114045   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0626 20:46:33.114400   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:33.114927   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:46:33.114949   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:33.115244   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:33.115434   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:33.115573   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:46:33.116751   47605 fix.go:102] recreateIfNeeded on embed-certs-299839: state=Stopped err=<nil>
	I0626 20:46:33.116783   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	W0626 20:46:33.116944   47605 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:33.119904   47605 out.go:177] * Restarting existing kvm2 VM for "embed-certs-299839" ...
	I0626 20:46:33.121277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Start
	I0626 20:46:33.121442   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring networks are active...
	I0626 20:46:33.122062   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network default is active
	I0626 20:46:33.122397   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network mk-embed-certs-299839 is active
	I0626 20:46:33.122783   47605 main.go:141] libmachine: (embed-certs-299839) Getting domain xml...
	I0626 20:46:33.123400   47605 main.go:141] libmachine: (embed-certs-299839) Creating domain...
	I0626 20:46:31.930997   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931492   47309 main.go:141] libmachine: (no-preload-934450) Found IP for machine: 192.168.50.38
	I0626 20:46:31.931507   47309 main.go:141] libmachine: (no-preload-934450) Reserving static IP address...
	I0626 20:46:31.931524   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has current primary IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931877   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.931901   47309 main.go:141] libmachine: (no-preload-934450) DBG | skip adding static IP to network mk-no-preload-934450 - found existing host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"}
	I0626 20:46:31.931916   47309 main.go:141] libmachine: (no-preload-934450) Reserved static IP address: 192.168.50.38
	I0626 20:46:31.931928   47309 main.go:141] libmachine: (no-preload-934450) DBG | Getting to WaitForSSH function...
	I0626 20:46:31.931939   47309 main.go:141] libmachine: (no-preload-934450) Waiting for SSH to be available...
	I0626 20:46:31.934393   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934786   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.934814   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934954   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH client type: external
	I0626 20:46:31.934971   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa (-rw-------)
	I0626 20:46:31.935060   47309 main.go:141] libmachine: (no-preload-934450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:31.935091   47309 main.go:141] libmachine: (no-preload-934450) DBG | About to run SSH command:
	I0626 20:46:31.935112   47309 main.go:141] libmachine: (no-preload-934450) DBG | exit 0
	I0626 20:46:32.021036   47309 main.go:141] libmachine: (no-preload-934450) DBG | SSH cmd err, output: <nil>: 
	I0626 20:46:32.021357   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetConfigRaw
	I0626 20:46:32.022056   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.024943   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025390   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.025426   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025663   47309 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/config.json ...
	I0626 20:46:32.025851   47309 machine.go:88] provisioning docker machine ...
	I0626 20:46:32.025868   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.026092   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026257   47309 buildroot.go:166] provisioning hostname "no-preload-934450"
	I0626 20:46:32.026280   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026450   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.028213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028583   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.028618   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028699   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.028869   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029019   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029154   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.029415   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.029867   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.029887   47309 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-934450 && echo "no-preload-934450" | sudo tee /etc/hostname
	I0626 20:46:32.150597   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-934450
	
	I0626 20:46:32.150629   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.153096   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153441   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.153486   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153576   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.153773   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.153984   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.154125   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.154288   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.154697   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.154723   47309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-934450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-934450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-934450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:32.270792   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
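	The shell snippet above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one, so repeated provisioning passes don't stack duplicate lines. Afterwards the guest's /etc/hosts carries:

		127.0.1.1 no-preload-934450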
	I0626 20:46:32.270827   47309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:32.270890   47309 buildroot.go:174] setting up certificates
	I0626 20:46:32.270902   47309 provision.go:83] configureAuth start
	I0626 20:46:32.270922   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.271206   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.273824   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274189   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.274213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274310   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.276495   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.276896   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.276927   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.277062   47309 provision.go:138] copyHostCerts
	I0626 20:46:32.277118   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:32.277126   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:32.277188   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:32.277271   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:32.277278   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:32.277300   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:32.277351   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:32.277357   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:32.277393   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:32.277450   47309 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.no-preload-934450 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube no-preload-934450]
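	The server certificate is issued against the cluster CA with both IP and DNS SANs (the san=[...] list above), so TLS verification succeeds whether the endpoint is reached by IP, by hostname, or via localhost. A self-contained Go sketch of issuing such a cert, assuming caCert and caKey are already loaded; the field choices here are illustrative, not minikube's exact template:

		package main

		import (
			"crypto/rand"
			"crypto/rsa"
			"crypto/x509"
			"crypto/x509/pkix"
			"math/big"
			"net"
			"time"
		)

		// issueServerCert signs a server cert covering the SANs seen in the log.
		func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
			key, err := rsa.GenerateKey(rand.Reader, 2048)
			if err != nil {
				return nil, nil, err
			}
			tmpl := &x509.Certificate{
				SerialNumber: big.NewInt(1),
				Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-934450"}},
				NotBefore:    time.Now(),
				NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
				KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
				ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
				IPAddresses:  []net.IP{net.ParseIP("192.168.50.38"), net.ParseIP("127.0.0.1")},
				DNSNames:     []string{"localhost", "minikube", "no-preload-934450"},
			}
			der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
			return der, key, err
		}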
	I0626 20:46:32.417361   47309 provision.go:172] copyRemoteCerts
	I0626 20:46:32.417430   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:32.417452   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.419946   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420300   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.420331   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420501   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.420703   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.420864   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.421017   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.501807   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 20:46:32.524284   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:32.546766   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0626 20:46:32.569677   47309 provision.go:86] duration metric: configureAuth took 298.742863ms
	I0626 20:46:32.569711   47309 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:32.569925   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:32.570026   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.572516   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.572864   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.572901   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.573011   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.573178   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573350   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573492   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.573646   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.574084   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.574102   47309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:32.859482   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:32.859509   47309 machine.go:91] provisioned docker machine in 833.647496ms
	I0626 20:46:32.859519   47309 start.go:300] post-start starting for "no-preload-934450" (driver="kvm2")
	I0626 20:46:32.859527   47309 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:32.859543   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.859892   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:32.859942   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.862731   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863099   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.863131   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863250   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.863434   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.863570   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.863698   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.946748   47309 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:32.951257   47309 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:32.951278   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:32.951351   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:32.951436   47309 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:32.951516   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:32.959676   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:32.982687   47309 start.go:303] post-start completed in 123.154915ms
	I0626 20:46:32.982714   47309 fix.go:56] fixHost completed within 18.665325334s
	I0626 20:46:32.982763   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.985318   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985693   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.985725   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985868   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.986072   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986226   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986388   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.986547   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.986951   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.986968   47309 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 20:46:33.094211   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812393.043726278
	
	I0626 20:46:33.094239   47309 fix.go:206] guest clock: 1687812393.043726278
	I0626 20:46:33.094248   47309 fix.go:219] Guest: 2023-06-26 20:46:33.043726278 +0000 UTC Remote: 2023-06-26 20:46:32.98271893 +0000 UTC m=+186.399054274 (delta=61.007348ms)
	I0626 20:46:33.094272   47309 fix.go:190] guest clock delta is within tolerance: 61.007348ms
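	fix.go reads the guest clock over SSH (the date +%s.%N command above), compares it against the host clock, and only forces a resync when the absolute delta exceeds a tolerance; the 61ms measured here passes. The comparison amounts to the following, with the tolerance value assumed for illustration:

		package main

		import "time"

		// clockDeltaOK reports whether the guest/host skew is acceptable.
		func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
			delta := guest.Sub(host)
			if delta < 0 {
				delta = -delta
			}
			return delta <= tolerance // e.g. 61ms against a tolerance of a few seconds
		}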
	I0626 20:46:33.094277   47309 start.go:83] releasing machines lock for "no-preload-934450", held for 18.776943332s
	I0626 20:46:33.094309   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.094577   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:33.097365   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097744   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.097775   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097979   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098382   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098586   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098661   47309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:33.098712   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.098797   47309 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:33.098816   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.101252   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101554   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101580   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101599   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101719   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.101873   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.101951   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101981   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.102007   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102160   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.102182   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.102316   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.102443   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102551   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.210044   47309 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:33.215912   47309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:33.359955   47309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:33.366146   47309 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:33.366217   47309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:33.380504   47309 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:33.380526   47309 start.go:466] detecting cgroup driver to use...
	I0626 20:46:33.380579   47309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:33.393306   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:33.404983   47309 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:33.405038   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:33.418216   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:33.432337   47309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:33.531250   47309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:33.645556   47309 docker.go:212] disabling docker service ...
	I0626 20:46:33.645633   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:33.659515   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:33.671856   47309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:33.774921   47309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:33.883215   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:46:33.898847   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:33.917506   47309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:33.917580   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.928683   47309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:33.928743   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.939242   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.949833   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
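	Taken together, the three sed edits above pin the pause image and switch cri-o to the cgroupfs driver with conmon in the pod cgroup. The drop-in /etc/crio/crio.conf.d/02-crio.conf then carries settings equivalent to (TOML table headers shown for context; they come from the stock file):

		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"

		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"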
	I0626 20:46:33.960544   47309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:33.970988   47309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:33.979977   47309 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:33.980018   47309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:33.992692   47309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:46:34.001898   47309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:34.099514   47309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:46:34.265988   47309 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:34.266060   47309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:34.273678   47309 start.go:534] Will wait 60s for crictl version
	I0626 20:46:34.273739   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.277401   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:34.312548   47309 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:34.312630   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.360715   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.413882   47309 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:34.415181   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:34.417841   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418166   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:34.418189   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418410   47309 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:34.422651   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:34.434668   47309 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:34.434717   47309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:34.465589   47309 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:34.465614   47309 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:46:34.465690   47309 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.465708   47309 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.465738   47309 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.465754   47309 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.465788   47309 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.465828   47309 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.465693   47309 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.465936   47309 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.467120   47309 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.467219   47309 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.467247   47309 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.467295   47309 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.467306   47309 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.467250   47309 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
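	(Note: the eight "daemon lookup" failures above are expected here — there is no local Docker daemon holding these images on the build host — so minikube falls back to its on-disk cache under .minikube/cache/images, which is what the inspect/rmi/load lines that follow are doing.)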
	I0626 20:46:34.636874   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.655059   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.683826   47309 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0626 20:46:34.683861   47309 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.683928   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.702952   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.703028   47309 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0626 20:46:34.703071   47309 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.703103   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.741790   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.741897   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0626 20:46:34.742006   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.746779   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.749151   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0626 20:46:34.759216   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.760925   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.763727   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.802768   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0626 20:46:34.802855   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0626 20:46:34.802879   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802936   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802879   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:34.875629   47309 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0626 20:46:34.875683   47309 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.875741   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976009   47309 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0626 20:46:34.976048   47309 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.976082   47309 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0626 20:46:34.976100   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976116   47309 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.976117   47309 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0626 20:46:34.976143   47309 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.976156   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976179   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:35.433285   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
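	(Note: the interleaved inspect/rmi/stat/load lines above all follow one per-image cycle. A sketch of that cycle for a single image — tags and paths are from the log; $expected is a hypothetical name for the digest minikube compares against:

	    img=registry.k8s.io/kube-proxy:v1.27.3
	    tar=/var/lib/minikube/images/kube-proxy_v1.27.3
	    have=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null)
	    if [ "$have" != "$expected" ]; then      # image missing or wrong digest: "needs transfer"
	        sudo /usr/bin/crictl rmi "$img"      # drop whatever tag is there
	        stat -c "%s %y" "$tar"               # skip the copy if the cached tarball already matches
	        sudo podman load -i "$tar"           # load from minikube's image cache into CRI-O storage
	    fi
	)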
	I0626 20:46:34.379704   47605 main.go:141] libmachine: (embed-certs-299839) Waiting to get IP...
	I0626 20:46:34.380770   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.381274   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.381362   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.381264   48187 retry.go:31] will retry after 291.849421ms: waiting for machine to come up
	I0626 20:46:34.674760   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.675247   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.675276   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.675192   48187 retry.go:31] will retry after 276.057593ms: waiting for machine to come up
	I0626 20:46:34.952573   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.953045   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.953077   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.953003   48187 retry.go:31] will retry after 360.478931ms: waiting for machine to come up
	I0626 20:46:35.315537   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.316036   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.316057   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.315988   48187 retry.go:31] will retry after 582.62072ms: waiting for machine to come up
	I0626 20:46:35.899816   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.900171   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.900232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.900154   48187 retry.go:31] will retry after 502.843212ms: waiting for machine to come up
	I0626 20:46:36.404792   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:36.405188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:36.405222   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:36.405134   48187 retry.go:31] will retry after 594.811848ms: waiting for machine to come up
	I0626 20:46:37.001827   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:37.002238   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:37.002264   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:37.002182   48187 retry.go:31] will retry after 1.067889284s: waiting for machine to come up
	I0626 20:46:38.071685   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:38.072135   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:38.072158   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:38.072094   48187 retry.go:31] will retry after 1.189834776s: waiting for machine to come up
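	(Note: while no-preload-934450 loads images, this parallel run is polling libvirt with backoff for embed-certs-299839's DHCP lease. Outside of minikube the same lease check can be done by hand on the host — network name taken from the log, virsh required:

	    virsh net-dhcp-leases mk-embed-certs-299839
	)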
	I0626 20:46:36.844137   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (2.041169028s)
	I0626 20:46:36.844171   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0626 20:46:36.844205   47309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.27.3: (2.041210189s)
	I0626 20:46:36.844232   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0626 20:46:36.844245   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844257   47309 ssh_runner.go:235] Completed: which crictl: (1.868146562s)
	I0626 20:46:36.844293   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844300   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:36.844234   47309 ssh_runner.go:235] Completed: which crictl: (1.968483663s)
	I0626 20:46:36.844349   47309 ssh_runner.go:235] Completed: which crictl: (1.868154335s)
	I0626 20:46:36.844364   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:36.844380   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:36.844405   47309 ssh_runner.go:235] Completed: which crictl: (1.868235538s)
	I0626 20:46:36.844428   47309 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.411115015s)
	I0626 20:46:36.844448   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:36.844455   47309 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0626 20:46:36.844488   47309 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:36.844513   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:39.895683   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (3.051359255s)
	I0626 20:46:39.895720   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0626 20:46:39.895808   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0: (3.051484848s)
	I0626 20:46:39.895824   47309 ssh_runner.go:235] Completed: which crictl: (3.051289954s)
	I0626 20:46:39.895855   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0626 20:46:39.895873   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1: (3.051494383s)
	I0626 20:46:39.895888   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:39.895908   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0626 20:46:39.895950   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:39.895909   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3: (3.051516174s)
	I0626 20:46:39.895990   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:39.896000   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3: (3.051535924s)
	I0626 20:46:39.896033   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0626 20:46:39.896034   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0626 20:46:39.896089   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.896102   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901778   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0626 20:46:39.901797   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901830   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.911439   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0626 20:46:39.911477   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0626 20:46:39.911517   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0626 20:46:39.943818   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0626 20:46:39.943947   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:41.278134   47309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.334156546s)
	I0626 20:46:41.278173   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0626 20:46:41.278135   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (1.376281957s)
	I0626 20:46:41.278187   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0626 20:46:41.278207   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:41.278256   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.263991   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:39.264402   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:39.264433   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:39.264371   48187 retry.go:31] will retry after 1.805262511s: waiting for machine to come up
	I0626 20:46:41.071232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:41.071707   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:41.071731   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:41.071662   48187 retry.go:31] will retry after 1.945519102s: waiting for machine to come up
	I0626 20:46:43.018581   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:43.019039   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:43.019075   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:43.018983   48187 retry.go:31] will retry after 2.83662877s: waiting for machine to come up
	I0626 20:46:43.745408   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.467115523s)
	I0626 20:46:43.745443   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0626 20:46:43.745479   47309 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:43.745551   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:45.011214   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.26563338s)
	I0626 20:46:45.011266   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0626 20:46:45.011296   47309 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:45.011349   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:45.858520   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:45.858992   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:45.859026   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:45.858941   48187 retry.go:31] will retry after 2.332305212s: waiting for machine to come up
	I0626 20:46:48.193085   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:48.193594   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:48.193625   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:48.193543   48187 retry.go:31] will retry after 2.846333425s: waiting for machine to come up
	I0626 20:46:52.634333   47779 start.go:369] acquired machines lock for "default-k8s-diff-port-473235" in 2m17.310683576s
	I0626 20:46:52.634385   47779 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:52.634413   47779 fix.go:54] fixHost starting: 
	I0626 20:46:52.634850   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:52.634890   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:52.654153   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0626 20:46:52.654638   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:52.655306   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:46:52.655337   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:52.655747   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:52.655952   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:46:52.656158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:46:52.657823   47779 fix.go:102] recreateIfNeeded on default-k8s-diff-port-473235: state=Stopped err=<nil>
	I0626 20:46:52.657850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	W0626 20:46:52.657997   47779 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:52.659722   47779 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-473235" ...
	I0626 20:46:51.043526   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044005   47605 main.go:141] libmachine: (embed-certs-299839) Found IP for machine: 192.168.39.51
	I0626 20:46:51.044034   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has current primary IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044045   47605 main.go:141] libmachine: (embed-certs-299839) Reserving static IP address...
	I0626 20:46:51.044351   47605 main.go:141] libmachine: (embed-certs-299839) Reserved static IP address: 192.168.39.51
	I0626 20:46:51.044368   47605 main.go:141] libmachine: (embed-certs-299839) Waiting for SSH to be available...
	I0626 20:46:51.044405   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.044439   47605 main.go:141] libmachine: (embed-certs-299839) DBG | skip adding static IP to network mk-embed-certs-299839 - found existing host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"}
	I0626 20:46:51.044456   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Getting to WaitForSSH function...
	I0626 20:46:51.046694   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047088   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.047121   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047312   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH client type: external
	I0626 20:46:51.047348   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa (-rw-------)
	I0626 20:46:51.047392   47605 main.go:141] libmachine: (embed-certs-299839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:51.047414   47605 main.go:141] libmachine: (embed-certs-299839) DBG | About to run SSH command:
	I0626 20:46:51.047432   47605 main.go:141] libmachine: (embed-certs-299839) DBG | exit 0
	I0626 20:46:51.137058   47605 main.go:141] libmachine: (embed-certs-299839) DBG | SSH cmd err, output: <nil>: 
	I0626 20:46:51.137408   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetConfigRaw
	I0626 20:46:51.197444   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.199920   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200306   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.200339   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200574   47605 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/config.json ...
	I0626 20:46:51.267260   47605 machine.go:88] provisioning docker machine ...
	I0626 20:46:51.267304   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:51.267709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.267921   47605 buildroot.go:166] provisioning hostname "embed-certs-299839"
	I0626 20:46:51.267943   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.268086   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.270429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270762   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.270790   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270886   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.271060   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271200   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271308   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.271475   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.271933   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.271950   47605 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-299839 && echo "embed-certs-299839" | sudo tee /etc/hostname
	I0626 20:46:51.403584   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-299839
	
	I0626 20:46:51.403622   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.406552   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.406876   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.406904   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.407053   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.407335   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407530   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407716   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.407883   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.408280   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.408300   47605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-299839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-299839/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-299839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:51.534666   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:51.534702   47605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:51.534745   47605 buildroot.go:174] setting up certificates
	I0626 20:46:51.534753   47605 provision.go:83] configureAuth start
	I0626 20:46:51.534766   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.535047   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.537753   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538113   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.538141   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.540471   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.540890   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.540922   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.541015   47605 provision.go:138] copyHostCerts
	I0626 20:46:51.541089   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:51.541099   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:51.541155   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:51.541237   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:51.541246   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:51.541277   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:51.541333   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:51.541339   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:51.541357   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:51.541434   47605 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.embed-certs-299839 san=[192.168.39.51 192.168.39.51 localhost 127.0.0.1 minikube embed-certs-299839]
	I0626 20:46:51.873317   47605 provision.go:172] copyRemoteCerts
	I0626 20:46:51.873396   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:51.873427   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.876293   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876659   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.876696   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876889   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.877100   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.877262   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.877430   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:51.970189   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:46:51.993067   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:52.015607   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
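	(Note: configureAuth above generates a server certificate with the SAN list shown and scps it into /etc/docker on the guest. A rough openssl equivalent of that certificate — illustrative only: minikube builds it in Go and signs it with its own CA, whereas this sketch is self-signed, and -addext needs OpenSSL 1.1.1+:

	    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	      -keyout server-key.pem -out server.pem \
	      -subj "/O=jenkins.embed-certs-299839" \
	      -addext "subjectAltName=IP:192.168.39.51,DNS:localhost,DNS:minikube,DNS:embed-certs-299839,IP:127.0.0.1"
	)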
	I0626 20:46:52.037359   47605 provision.go:86] duration metric: configureAuth took 502.581033ms
	I0626 20:46:52.037401   47605 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:52.037623   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:52.037714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.040949   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.041486   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041642   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.041859   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042061   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042235   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.042398   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.042916   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.042936   47605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:52.366045   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:52.366072   47605 machine.go:91] provisioned docker machine in 1.098783864s
	I0626 20:46:52.366083   47605 start.go:300] post-start starting for "embed-certs-299839" (driver="kvm2")
	I0626 20:46:52.366112   47605 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:52.366134   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.366443   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:52.366472   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.369138   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369570   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.369630   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369781   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.369957   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.370131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.370278   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.467055   47605 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:52.471203   47605 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:52.471222   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:52.471288   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:52.471394   47605 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:52.471510   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:52.484668   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:52.510268   47605 start.go:303] post-start completed in 144.162745ms
	I0626 20:46:52.510292   47605 fix.go:56] fixHost completed within 19.415851972s
	I0626 20:46:52.510315   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.513188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513629   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.513662   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513848   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.514062   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514228   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514415   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.514569   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.514968   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.514983   47605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:46:52.634177   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812412.582368193
	
	I0626 20:46:52.634199   47605 fix.go:206] guest clock: 1687812412.582368193
	I0626 20:46:52.634209   47605 fix.go:219] Guest: 2023-06-26 20:46:52.582368193 +0000 UTC Remote: 2023-06-26 20:46:52.510296584 +0000 UTC m=+163.430129249 (delta=72.071609ms)
	I0626 20:46:52.634237   47605 fix.go:190] guest clock delta is within tolerance: 72.071609ms
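	(The delta checks out: guest 20:46:52.582368193 minus remote 20:46:52.510296584 is 0.072071609s, i.e. the 72.071609ms reported, well inside the drift tolerance.)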
	I0626 20:46:52.634242   47605 start.go:83] releasing machines lock for "embed-certs-299839", held for 19.539848437s
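	(Note the handoff: the machines lock released here at 20:46:52.634 is the one default-k8s-diff-port-473235 reports acquiring at the same instant after its 2m17s wait earlier in this log, so the parallel StartStop profiles are serialized on this single lock.)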
	I0626 20:46:52.634277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.634623   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:52.637732   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638182   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.638220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638476   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639040   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639223   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639307   47605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:52.639346   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.639490   47605 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:52.639517   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.642288   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642923   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642968   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643016   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643351   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643492   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643528   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643564   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643763   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.643778   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643973   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643991   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.644109   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.644240   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.761230   47605 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:52.766865   47605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:52.919883   47605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:52.927218   47605 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:52.927290   47605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:52.948916   47605 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:52.948983   47605 start.go:466] detecting cgroup driver to use...
	I0626 20:46:52.949043   47605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:52.968673   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:52.982360   47605 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:52.982416   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:52.996984   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:53.015021   47605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:53.116692   47605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:53.251017   47605 docker.go:212] disabling docker service ...
	I0626 20:46:53.251096   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:53.268097   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:53.282223   47605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:53.412477   47605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:53.528110   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:46:53.541392   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:53.558736   47605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:53.558809   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.568482   47605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:53.568553   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.578178   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.587728   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.597231   47605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:53.606954   47605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:53.615250   47605 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:53.615308   47605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:53.628161   47605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:46:53.636477   47605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:53.755919   47605 ssh_runner.go:195] Run: sudo systemctl restart crio
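	(Note: as with the earlier profile, the sysctl probe fails until br_netfilter is loaded. After the modprobe and restart above, the prerequisites can be confirmed by hand — expected values inferred from the commands the log runs:

	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables   # readable once the module is loaded; Kubernetes expects 1
	    cat /proc/sys/net/ipv4/ip_forward           # the log writes 1 here explicitly
	)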
	I0626 20:46:53.928744   47605 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:53.928823   47605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:53.934088   47605 start.go:534] Will wait 60s for crictl version
	I0626 20:46:53.934152   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:46:53.939345   47605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:53.971679   47605 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:53.971781   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.013494   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.062724   47605 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:54.064536   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:54.067854   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:54.068254   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068535   47605 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:54.072971   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:54.085981   47605 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:54.086048   47605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:52.661170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Start
	I0626 20:46:52.661331   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring networks are active...
	I0626 20:46:52.662042   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network default is active
	I0626 20:46:52.662444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network mk-default-k8s-diff-port-473235 is active
	I0626 20:46:52.663218   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Getting domain xml...
	I0626 20:46:52.663876   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Creating domain...
	I0626 20:46:53.987148   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting to get IP...
	I0626 20:46:53.988282   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988739   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988832   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:53.988735   48355 retry.go:31] will retry after 271.192351ms: waiting for machine to come up
	I0626 20:46:54.261343   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261825   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261857   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.261773   48355 retry.go:31] will retry after 362.262293ms: waiting for machine to come up
	I0626 20:46:54.625453   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625951   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625978   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.625859   48355 retry.go:31] will retry after 311.337455ms: waiting for machine to come up
	I0626 20:46:54.938519   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939023   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939053   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.938972   48355 retry.go:31] will retry after 446.154442ms: waiting for machine to come up
	I0626 20:46:52.039929   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.0285527s)
	I0626 20:46:52.039951   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0626 20:46:52.039974   47309 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.040015   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.786422   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0626 20:46:52.786468   47309 cache_images.go:123] Successfully loaded all cached images
	I0626 20:46:52.786474   47309 cache_images.go:92] LoadImages completed in 18.320847233s
	I0626 20:46:52.786562   47309 ssh_runner.go:195] Run: crio config
	I0626 20:46:52.857805   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:46:52.857833   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:52.857849   47309 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:52.857871   47309 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.38 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-934450 NodeName:no-preload-934450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:52.858035   47309 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-934450"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:46:52.858115   47309 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-934450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
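The kubelet unit above is rendered from the cluster config and then scp'd to the guest as the 10-kubeadm.conf systemd drop-in (the 376-byte transfer a few lines below). A small text/template sketch of that rendering step; the NodeConfig struct is invented here for illustration, while minikube's actual types and template live in its bootstrapper packages:

package main

import (
	"os"
	"text/template"
)

// NodeConfig holds just the fields the drop-in needs; illustrative only.
type NodeConfig struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
}

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	cfg := NodeConfig{
		KubernetesVersion: "v1.27.3",
		Hostname:          "no-preload-934450",
		NodeIP:            "192.168.50.38",
	}
	// Render to stdout; minikube writes the result to the guest over SSH.
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}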
	I0626 20:46:52.858172   47309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:52.867179   47309 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:52.867253   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:52.875412   47309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0626 20:46:52.891376   47309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:52.906859   47309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0626 20:46:52.924927   47309 ssh_runner.go:195] Run: grep 192.168.50.38	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:52.929059   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:52.942789   47309 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450 for IP: 192.168.50.38
	I0626 20:46:52.942825   47309 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:52.943011   47309 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:52.943059   47309 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:52.943138   47309 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.key
	I0626 20:46:52.943195   47309 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key.01da567d
	I0626 20:46:52.943236   47309 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key
	I0626 20:46:52.943341   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:52.943376   47309 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:52.943396   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:52.943435   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:52.943472   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:52.943509   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:52.943551   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:52.944147   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:52.971630   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:52.997892   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:53.024951   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 20:46:53.048462   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:53.075077   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:53.100318   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:53.129545   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:53.162187   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:53.191304   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:53.216166   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:53.240182   47309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:53.256447   47309 ssh_runner.go:195] Run: openssl version
	I0626 20:46:53.262053   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:53.272163   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277028   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277084   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.282611   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:53.296039   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:53.306923   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312778   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312825   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.320244   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:53.334066   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:53.347662   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353665   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353725   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.361150   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:46:53.374846   47309 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:53.380462   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:53.387949   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:53.393690   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:53.399208   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:53.405073   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:53.411265   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
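Two conventions are at work in the certificate block above: the `openssl x509 -hash` / `ln -fs ...<hash>.0` pairs install each PEM under the subject-hash filename OpenSSL uses to look up CAs in /etc/ssl/certs, and each `openssl x509 -noout -checkend 86400` asks whether a certificate will still be valid 24 hours from now (a non-zero exit would trigger regeneration). An equivalent expiry check in Go, reading a PEM file path from the command line; the path handling is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate in pemBytes expires within d,
// matching the semantics of `openssl x509 -checkend <seconds>`.
func checkend(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	expiring, err := checkend(data, 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if expiring {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least 24h")
}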
	I0626 20:46:53.417798   47309 kubeadm.go:404] StartCluster: {Name:no-preload-934450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiN
odeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:53.417916   47309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:53.417950   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:53.451231   47309 cri.go:89] found id: ""
	I0626 20:46:53.451307   47309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:53.460716   47309 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:53.460737   47309 kubeadm.go:636] restartCluster start
	I0626 20:46:53.460790   47309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:53.470518   47309 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.471961   47309 kubeconfig.go:92] found "no-preload-934450" server: "https://192.168.50.38:8443"
	I0626 20:46:53.475433   47309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:53.484054   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.484108   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:53.497348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.998070   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.998129   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.010119   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.498134   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.498223   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.512223   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.997432   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.997520   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.015317   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.497435   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.497516   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.512591   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.998180   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.998251   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.013135   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:56.497483   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.497573   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.512714   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
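The repeating "Checking apiserver status ... stopped: unable to get apiserver pid" blocks above are a fixed-interval poll: minikube re-runs `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until it finds a pid or its context deadline expires, which is what eventually produces the "needs reconfigure: apiserver error: context deadline exceeded" line further down. A compact sketch of that loop; the helper is illustrative, not minikube's api_server.go, and the 10s deadline below is an assumption:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPID polls pgrep until it returns a pid or ctx expires.
func apiserverPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits non-zero when no process matches, as in the log.
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := apiserverPID(ctx)
	if err != nil {
		fmt.Println("stopped:", err) // e.g. "context deadline exceeded"
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}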
	I0626 20:46:54.116295   47605 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:54.116360   47605 ssh_runner.go:195] Run: which lz4
	I0626 20:46:54.120344   47605 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:46:54.124462   47605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:46:54.124490   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:46:55.959041   47605 crio.go:444] Took 1.838722 seconds to copy over tarball
	I0626 20:46:55.959115   47605 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:46:59.019532   47605 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060382374s)
	I0626 20:46:59.019555   47605 crio.go:451] Took 3.060486 seconds to extract the tarball
	I0626 20:46:59.019562   47605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:46:59.058687   47605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:59.102812   47605 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:46:59.102833   47605 cache_images.go:84] Images are preloaded, skipping loading
	I0626 20:46:59.102896   47605 ssh_runner.go:195] Run: crio config
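The preload path above is: probe for /preloaded.tar.lz4 on the guest with stat, scp the cached tarball over when it is missing, extract it into /var with `tar -I lz4` (so the crio image store is populated in one shot), then delete the tarball and re-run `crictl images` to confirm. A sketch of the check-extract-cleanup steps, shelling out the same way the runner does, with local execution standing in for the SSH session:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Existence check, as in the log's `stat -c "%s %y"` probe.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball not present; would scp it from the cache first")
		return
	}

	// Extract with lz4 as the decompressor, straight into /var.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}

	// Remove the tarball afterwards, matching the `rm: /preloaded.tar.lz4` step.
	if err := os.Remove(tarball); err != nil {
		fmt.Println("cleanup:", err)
	}
}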
	I0626 20:46:55.386479   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.386986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.387014   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:55.386901   48355 retry.go:31] will retry after 710.798834ms: waiting for machine to come up
	I0626 20:46:56.099580   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100079   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100112   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:56.100023   48355 retry.go:31] will retry after 921.187154ms: waiting for machine to come up
	I0626 20:46:57.022481   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022914   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.022859   48355 retry.go:31] will retry after 914.232442ms: waiting for machine to come up
	I0626 20:46:57.938375   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938823   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938845   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.938807   48355 retry.go:31] will retry after 1.411011331s: waiting for machine to come up
	I0626 20:46:59.351697   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352133   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352169   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:59.352076   48355 retry.go:31] will retry after 1.830031795s: waiting for machine to come up
	I0626 20:46:56.997450   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.997518   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.009310   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.497847   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.497929   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.513061   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.997474   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.997553   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.012610   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.498200   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.498274   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.513410   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.997938   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.998022   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.013357   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.497503   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.497581   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.514354   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.997445   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.997531   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.008894   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.497471   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.497555   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.508635   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.998326   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.998429   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.009836   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.498479   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.498593   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.510348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.159206   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:46:59.159236   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:59.159252   47605 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:59.159286   47605 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-299839 NodeName:embed-certs-299839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:59.159423   47605 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-299839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:46:59.159484   47605 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-299839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 20:46:59.159540   47605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:59.168802   47605 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:59.168882   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:59.177994   47605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0626 20:46:59.196041   47605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:59.214092   47605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0626 20:46:59.235187   47605 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:59.239440   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:59.251723   47605 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839 for IP: 192.168.39.51
	I0626 20:46:59.251772   47605 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:59.251943   47605 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:59.252017   47605 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:59.252134   47605 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/client.key
	I0626 20:46:59.252381   47605 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key.be9c3c95
	I0626 20:46:59.252482   47605 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key
	I0626 20:46:59.252626   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:59.252667   47605 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:59.252682   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:59.252718   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:59.252748   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:59.252805   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:59.252868   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:59.253555   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:59.280222   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:59.306244   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:59.331876   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:46:59.358710   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:59.385239   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:59.408963   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:59.433684   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:59.457235   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:59.480565   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:59.507918   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:59.532762   47605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:59.551283   47605 ssh_runner.go:195] Run: openssl version
	I0626 20:46:59.557079   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:59.568335   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573129   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573187   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.579116   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:46:59.589952   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:59.600935   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605668   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605735   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.611234   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:59.622615   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:59.633737   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638884   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638962   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.644559   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:59.655653   47605 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:59.660632   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:59.666672   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:59.672628   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:59.679194   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:59.685197   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:59.691190   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0626 20:46:59.697063   47605 kubeadm.go:404] StartCluster: {Name:embed-certs-299839 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Multi
NodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:59.697146   47605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:59.697191   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:59.731197   47605 cri.go:89] found id: ""
	I0626 20:46:59.731256   47605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:59.741949   47605 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:59.741968   47605 kubeadm.go:636] restartCluster start
	I0626 20:46:59.742023   47605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:59.751837   47605 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.753347   47605 kubeconfig.go:92] found "embed-certs-299839" server: "https://192.168.39.51:8443"
	I0626 20:46:59.756955   47605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:59.766951   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.767023   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.779343   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.280064   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.280149   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.293730   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.780264   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.780347   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.793352   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.279827   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.279911   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.292843   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.779409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.779513   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.793293   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.279814   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.279902   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.296345   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.779892   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.779980   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.796346   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.280342   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.280417   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.292883   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.780156   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.780232   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.792667   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.184295   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184668   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184694   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:01.184605   48355 retry.go:31] will retry after 2.248796967s: waiting for machine to come up
	I0626 20:47:03.435559   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436054   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436086   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:03.435982   48355 retry.go:31] will retry after 2.012102985s: waiting for machine to come up
	I0626 20:47:01.998275   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.998353   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.014217   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.497731   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.497824   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.509505   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.998119   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.998202   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.009348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.485111   47309 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:03.485154   47309 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:03.485167   47309 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:03.485216   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:03.516791   47309 cri.go:89] found id: ""
	I0626 20:47:03.516868   47309 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:03.531523   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:03.540694   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:03.540761   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549498   47309 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549525   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:03.687202   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.779117   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.091878038s)
	I0626 20:47:04.779156   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.983470   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:05.059963   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:05.136199   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:05.136282   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:05.663265   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:06.163057   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
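
Because all four kubeconfig files under /etc/kubernetes were missing, the cluster is rebuilt by running the individual kubeadm init phases in sequence (certs, kubeconfig, kubelet-start, control-plane, etcd) against the pinned v1.27.3 binaries, rather than a full kubeadm init. A sketch of that sequence, run through bash locally as a stand-in for the remote runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The phase order used above; PATH points at the pinned kubeadm.
    	const env = `PATH="/var/lib/minikube/binaries/v1.27.3:$PATH"`
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(`sudo env %s kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, env, p)
    		c := exec.Command("/bin/bash", "-c", cmd)
    		c.Stdout, c.Stderr = os.Stdout, os.Stderr
    		if err := c.Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "phase failed:", p, err)
    			os.Exit(1)
    		}
    	}
    }
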
	I0626 20:47:04.280330   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.280447   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.292565   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:04.780127   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.780225   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.797554   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.279900   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.279986   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.297853   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.779501   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.779594   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.794314   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.279916   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.280001   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.296829   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.779473   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.779566   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.793302   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.279802   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.279888   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.292407   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.779813   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.779914   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.793591   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.279846   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.279935   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.292196   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.779753   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.779859   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.792362   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.450681   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451186   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451216   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:05.451117   48355 retry.go:31] will retry after 3.442192384s: waiting for machine to come up
	I0626 20:47:08.895024   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:08.895520   48355 retry.go:31] will retry after 4.272351839s: waiting for machine to come up
	I0626 20:47:06.662926   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.163275   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.662871   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.689321   47309 api_server.go:72] duration metric: took 2.55312002s to wait for apiserver process to appear ...
	I0626 20:47:07.689348   47309 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:07.689366   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:10.879412   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:10.879439   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:11.379823   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.386705   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.386736   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:11.880574   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.892733   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.892768   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:12.380392   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:12.389894   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:47:12.400274   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:12.400307   47309 api_server.go:131] duration metric: took 4.710951407s to wait for apiserver health ...
	I0626 20:47:12.400320   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:47:12.400332   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:12.402355   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
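
cni.go picks a CNI from the driver/runtime pair: with the kvm2 driver and the crio runtime there is no Docker-managed networking, so the simple bridge plugin is recommended. Roughly, as an assumed simplification (the real decision table lives in minikube's cni package):

    package main

    import "fmt"

    // chooseCNI sketches the decision logged above:
    // "kvm2" driver + "crio" runtime => bridge.
    func chooseCNI(driver, runtime string) string {
    	if runtime == "crio" || runtime == "containerd" {
    		// Non-Docker runtimes need an explicit CNI; on a
    		// single-node VM driver the bridge plugin is enough.
    		return "bridge"
    	}
    	return "" // Docker runtime: no minikube-managed CNI needed
    }

    func main() {
    	fmt.Println(chooseCNI("kvm2", "crio")) // bridge
    }
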
	I0626 20:47:09.280409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:09.280512   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:09.293009   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:09.767593   47605 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:09.767636   47605 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:09.767648   47605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:09.767705   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:09.800380   47605 cri.go:89] found id: ""
	I0626 20:47:09.800465   47605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:09.819239   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:09.830482   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:09.830547   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840424   47605 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840451   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:09.979898   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.746785   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.960847   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:11.041569   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:11.122238   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:11.122322   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:11.640034   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.140386   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.640370   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.139901   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.639546   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.663848   47605 api_server.go:72] duration metric: took 2.54160148s to wait for apiserver process to appear ...
	I0626 20:47:13.663874   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:13.663905   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:14.587552   46683 start.go:369] acquired machines lock for "old-k8s-version-490377" in 55.268521785s
	I0626 20:47:14.587610   46683 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:47:14.587622   46683 fix.go:54] fixHost starting: 
	I0626 20:47:14.588035   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:47:14.588074   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:47:14.607186   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0626 20:47:14.607765   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:47:14.608361   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:47:14.608384   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:47:14.608697   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:47:14.608908   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:14.609056   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:47:14.610765   46683 fix.go:102] recreateIfNeeded on old-k8s-version-490377: state=Stopped err=<nil>
	I0626 20:47:14.610791   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	W0626 20:47:14.611905   46683 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:47:14.613885   46683 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-490377" ...
	I0626 20:47:13.169996   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.170568   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Found IP for machine: 192.168.61.238
	I0626 20:47:13.170601   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserving static IP address...
	I0626 20:47:13.170622   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has current primary IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.171048   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.171080   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserved static IP address: 192.168.61.238
	I0626 20:47:13.171107   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | skip adding static IP to network mk-default-k8s-diff-port-473235 - found existing host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"}
	I0626 20:47:13.171128   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Getting to WaitForSSH function...
	I0626 20:47:13.171141   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for SSH to be available...
	I0626 20:47:13.173755   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174235   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.174265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174442   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH client type: external
	I0626 20:47:13.174485   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa (-rw-------)
	I0626 20:47:13.174518   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:13.174538   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | About to run SSH command:
	I0626 20:47:13.174553   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | exit 0
	I0626 20:47:13.265799   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | SSH cmd err, output: <nil>: 
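
"Waiting for SSH" is simply running exit 0 through the external ssh invocation shown above until it returns status 0, which proves sshd is up and the machine key is accepted. A sketch with the host and a subset of the options from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa",
    		"docker@192.168.61.238", "exit 0",
    	}
    	for i := 0; i < 30; i++ {
    		// Exit status 0 from "exit 0" means SSH is fully usable.
    		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }
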
	I0626 20:47:13.266189   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetConfigRaw
	I0626 20:47:13.266850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.269749   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270212   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.270253   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270498   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:47:13.270732   47779 machine.go:88] provisioning docker machine ...
	I0626 20:47:13.270758   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:13.270959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271112   47779 buildroot.go:166] provisioning hostname "default-k8s-diff-port-473235"
	I0626 20:47:13.271134   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.273679   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274087   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.274135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274273   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.274446   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274618   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274747   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.274940   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.275353   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.275369   47779 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-473235 && echo "default-k8s-diff-port-473235" | sudo tee /etc/hostname
	I0626 20:47:13.416565   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-473235
	
	I0626 20:47:13.416595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.420132   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420596   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.420670   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.421172   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421392   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.421821   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.422425   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.422457   47779 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-473235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-473235/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-473235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:13.566095   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:47:13.566131   47779 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:13.566175   47779 buildroot.go:174] setting up certificates
	I0626 20:47:13.566192   47779 provision.go:83] configureAuth start
	I0626 20:47:13.566206   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.566509   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.569795   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570251   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.570283   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570476   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.573020   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573439   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.573475   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573704   47779 provision.go:138] copyHostCerts
	I0626 20:47:13.573782   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:13.573795   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:13.573859   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:13.573976   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:13.573987   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:13.574016   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:13.574094   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:13.574108   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:13.574134   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:13.574199   47779 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-473235 san=[192.168.61.238 192.168.61.238 localhost 127.0.0.1 minikube default-k8s-diff-port-473235]
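
The server certificate is minted from minikube's own CA and must carry every name a client might dial as a SAN, which is why the list above includes the IP, localhost, and both hostnames. A self-contained crypto/x509 sketch (it generates a throwaway CA in-process for illustration; minikube of course reuses the ca.pem/ca-key.pem loaded above, and errors are elided here for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA for the sketch.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-473235"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-473235"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.61.238"), net.ParseIP("127.0.0.1")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
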
	I0626 20:47:13.795155   47779 provision.go:172] copyRemoteCerts
	I0626 20:47:13.795207   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:13.795230   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.798039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798457   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.798512   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798706   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.798918   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.799130   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.799274   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:13.892185   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:13.921840   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0626 20:47:13.951311   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:13.980185   47779 provision.go:86] duration metric: configureAuth took 413.976937ms
	I0626 20:47:13.980216   47779 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:13.980460   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:47:13.980551   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.983814   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984217   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.984265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984604   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.984826   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985010   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985144   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.985344   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.985947   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.985979   47779 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:14.317679   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:14.317702   47779 machine.go:91] provisioned docker machine in 1.046953094s
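
The stray %!s(MISSING) in the crio.minikube command above (and in date +%!s(MISSING).%!N(MISSING) further down) is not corruption in this report: it is Go's fmt package flagging a format verb that had no matching operand, i.e. the command template contained a literal %s (and %N) that was evidently not escaped as %% before going through a printf-style formatter. The command still did its job, as the tee output shows. A two-line demonstration:

    package main

    import "fmt"

    func main() {
    	// Verbs with no operands render as "%!verb(MISSING)" instead of
    	// failing, exactly as seen in the provisioning log. (go vet
    	// flags calls like these, which is how such bugs are usually caught.)
    	fmt.Printf("printf %s \n")  // printf %!s(MISSING)
    	fmt.Printf("date +%s.%N\n") // date +%!s(MISSING).%!N(MISSING)
    }
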
	I0626 20:47:14.317713   47779 start.go:300] post-start starting for "default-k8s-diff-port-473235" (driver="kvm2")
	I0626 20:47:14.317723   47779 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:14.317744   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.318064   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:14.318101   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.321001   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321358   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.321408   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321598   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.321806   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.321986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.322139   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.414722   47779 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:14.419797   47779 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:14.419822   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:14.419895   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:14.419990   47779 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:14.420118   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:14.430766   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:14.458086   47779 start.go:303] post-start completed in 140.355388ms
	I0626 20:47:14.458107   47779 fix.go:56] fixHost completed within 21.823695632s
	I0626 20:47:14.458125   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.460953   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461277   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.461308   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461472   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.461651   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.461841   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.462025   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.462175   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:14.462805   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:14.462823   47779 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 20:47:14.587374   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812434.534091475
	
	I0626 20:47:14.587395   47779 fix.go:206] guest clock: 1687812434.534091475
	I0626 20:47:14.587403   47779 fix.go:219] Guest: 2023-06-26 20:47:14.534091475 +0000 UTC Remote: 2023-06-26 20:47:14.458110543 +0000 UTC m=+159.266861615 (delta=75.980932ms)
	I0626 20:47:14.587446   47779 fix.go:190] guest clock delta is within tolerance: 75.980932ms
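
The guest clock check reads `date +%s.%N` on the VM (the template mangled above by the same formatter bug), parses the seconds.nanoseconds value, and compares it with the host-side timestamp taken around the same call; the ~76ms delta is inside tolerance, so no resync is forced. A sketch of the comparison, using the values captured above; the 1s tolerance is an assumption, since the actual threshold is not shown in this log:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, as captured in the log.
    	raw := "1687812434.534091475"
    	secs, err := strconv.ParseFloat(raw, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	host := time.Date(2023, 6, 26, 20, 47, 14, 458110543, time.UTC) // "Remote" time above
    	delta := guest.Sub(host)
    	const tolerance = time.Second // assumed threshold
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(float64(delta)) < float64(tolerance))
    }
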
	I0626 20:47:14.587456   47779 start.go:83] releasing machines lock for "default-k8s-diff-port-473235", held for 21.953095935s
	I0626 20:47:14.587492   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.587776   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:14.590654   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591111   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.591143   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591332   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.591869   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592074   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592151   47779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:14.592205   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.592451   47779 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:14.592489   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.595039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595271   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595585   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595615   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595659   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595698   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595901   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596076   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596118   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596311   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596344   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596466   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.596622   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.683637   47779 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:14.713738   47779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:14.869873   47779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:14.877719   47779 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:14.877815   47779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:14.893656   47779 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:47:14.893682   47779 start.go:466] detecting cgroup driver to use...
	I0626 20:47:14.893738   47779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:14.908419   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:14.921730   47779 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:14.921812   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:14.940659   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:14.955010   47779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:15.062849   47779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:15.193682   47779 docker.go:212] disabling docker service ...
	I0626 20:47:15.193810   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:15.210855   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:15.223362   47779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:15.348648   47779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:15.471398   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:47:15.496137   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:15.523967   47779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:47:15.524041   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.537188   47779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:15.537258   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.550404   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.563577   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
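
The three sed one-liners rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs (matching the kubelet), and re-add conmon_cgroup = "pod" right after it, since conmon's cgroup must be pod-scoped when cgroupfs is the manager. The same transformation sketched in Go against a sample config:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.6\"\n" +
    		"cgroup_manager = \"systemd\"\n" +
    		"conmon_cgroup = \"system.slice\"\n"
    	// Same substitutions the sed one-liners in the log perform.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then append the
    	// pod-scoped setting after the cgroup_manager line.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }
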
	I0626 20:47:15.574958   47779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:15.588685   47779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:15.600611   47779 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:15.600680   47779 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:15.615658   47779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
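
The failed sysctl is the expected probe result: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, and it must be loaded so traffic crossing the Linux bridge is seen by iptables (which kube-proxy programs); ip_forward is then enabled for routed pod traffic. The fallback logic, sketched with a local bash runner in place of ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run is a hypothetical local stand-in for the ssh_runner calls
    // in the log; it executes the command through bash.
    func run(cmd string) error {
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	// The sysctl is absent until br_netfilter is loaded; a failure
    	// here "might be okay", as the log puts it, and triggers modprobe.
    	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
    		fmt.Println("sysctl missing, loading br_netfilter:", err)
    		if err := run("sudo modprobe br_netfilter"); err != nil {
    			panic(err)
    		}
    	}
    	// IPv4 forwarding must be on for pod-to-pod and NAT'd egress traffic.
    	if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
    		panic(err)
    	}
    }
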
	I0626 20:47:15.628004   47779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:15.763410   47779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:15.982719   47779 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:15.982799   47779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:47:15.990799   47779 start.go:534] Will wait 60s for crictl version
	I0626 20:47:15.990864   47779 ssh_runner.go:195] Run: which crictl
	I0626 20:47:15.997709   47779 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:16.041802   47779 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:16.041893   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.094989   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.151324   47779 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:47:12.403841   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:12.420028   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:12.459593   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:12.486209   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:12.486256   47309 system_pods.go:61] "coredns-5d78c9869d-dwkng" [8919aa0b-b8b6-4672-aa75-ea5ea1d27ef6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:12.486270   47309 system_pods.go:61] "etcd-no-preload-934450" [67a1367b-dc99-4613-8a75-796a64f13f0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:12.486281   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [7452cf79-3e8f-4dce-922a-a52115c7059f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:12.486291   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [a3393645-4d3d-4fab-a32f-c15ff3bfcdca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:12.486300   47309 system_pods.go:61] "kube-proxy-phrv2" [d08fdd52-cc2a-43cb-84c4-170ad241527e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:12.486310   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [cc1c89f8-925a-4847-b693-08fbc4905119] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:12.486319   47309 system_pods.go:61] "metrics-server-74d5c6b9c-7szm5" [d94c68f7-4521-4366-b5db-38f420a78dd2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:12.486331   47309 system_pods.go:61] "storage-provisioner" [7aa74f96-c306-4d70-a211-715b4877b15b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:12.486341   47309 system_pods.go:74] duration metric: took 26.722879ms to wait for pod list to return data ...
	I0626 20:47:12.486359   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:12.490745   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:12.490784   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:12.490809   47309 node_conditions.go:105] duration metric: took 4.437855ms to run NodePressure ...
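The NodePressure check reads the same capacity figures that kubectl exposes on the node object; a sketch for eyeballing them by hand:

    kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage'
    # expect CPU=2 and EPHEMERAL=17784752Ki, matching the log lines above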
	I0626 20:47:12.490830   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:12.794912   47309 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800827   47309 kubeadm.go:787] kubelet initialised
	I0626 20:47:12.800855   47309 kubeadm.go:788] duration metric: took 5.915334ms waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800865   47309 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:12.807162   47309 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:14.822450   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:14.614985   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Start
	I0626 20:47:14.615159   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring networks are active...
	I0626 20:47:14.615866   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network default is active
	I0626 20:47:14.616331   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network mk-old-k8s-version-490377 is active
	I0626 20:47:14.616785   46683 main.go:141] libmachine: (old-k8s-version-490377) Getting domain xml...
	I0626 20:47:14.617507   46683 main.go:141] libmachine: (old-k8s-version-490377) Creating domain...
	I0626 20:47:16.055502   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting to get IP...
	I0626 20:47:16.056448   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.056913   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.057009   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.056935   48478 retry.go:31] will retry after 281.770624ms: waiting for machine to come up
	I0626 20:47:16.340685   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.341472   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.341496   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.341268   48478 retry.go:31] will retry after 249.185886ms: waiting for machine to come up
	I0626 20:47:16.591867   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.592547   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.592718   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.592671   48478 retry.go:31] will retry after 327.814159ms: waiting for machine to come up
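The retry lines above are libmachine polling libvirt for a DHCP lease with a growing backoff. A hand-rolled equivalent on the host, using the network name and MAC address from the log (field 5 of virsh's lease table is the IP column):

    for delay in 1 2 4 8 16; do
      ip=$(sudo virsh net-dhcp-leases mk-old-k8s-version-490377 \
             | awk '/52:54:00:cc:27:8f/ {print $5}')
      [ -n "$ip" ] && { echo "lease: $ip"; break; }
      sleep "$delay"
    done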
	I0626 20:47:17.910025   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:17.910061   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:18.411167   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.425310   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.425345   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:18.910567   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.920897   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.920933   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:19.410736   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:19.418228   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0626 20:47:19.428516   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:19.428551   47605 api_server.go:131] duration metric: took 5.764669652s to wait for apiserver health ...
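The 403s above come from probing /healthz anonymously before the RBAC bootstrap poststarthook has installed the default roles; the subsequent 500s then clear one poststarthook at a time until the endpoint returns a plain "ok". Probing the same endpoint by hand (the in-guest admin.conf path is minikube's usual location and an assumption here):

    curl -sk https://192.168.39.51:8443/healthz ; echo
    sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get --raw='/healthz?verbose'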
	I0626 20:47:19.428561   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:47:19.428573   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:19.430711   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:47:16.152563   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:16.156250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156617   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:16.156644   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156894   47779 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:16.162480   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:16.180283   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:47:16.180336   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:16.227399   47779 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:47:16.227474   47779 ssh_runner.go:195] Run: which lz4
	I0626 20:47:16.233720   47779 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:16.240423   47779 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:16.240463   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:47:18.263416   47779 crio.go:444] Took 2.029753 seconds to copy over tarball
	I0626 20:47:18.263515   47779 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:47:16.837607   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:19.361799   47309 pod_ready.go:92] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.361869   47309 pod_ready.go:81] duration metric: took 6.554677083s waiting for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.361886   47309 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370122   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.370145   47309 pod_ready.go:81] duration metric: took 8.249243ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370157   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391052   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:21.391082   47309 pod_ready.go:81] duration metric: took 2.020917194s waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391096   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
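The pod_ready loop above is roughly one "kubectl wait" per control-plane component, using the label selectors listed in the "extra waiting" line; a sketch of the equivalent, with the timeout mirroring the 4m0s budget:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
    done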
	I0626 20:47:16.922381   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.922923   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.922952   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.922873   48478 retry.go:31] will retry after 486.21568ms: waiting for machine to come up
	I0626 20:47:17.410676   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:17.411282   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:17.411305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:17.411227   48478 retry.go:31] will retry after 606.277374ms: waiting for machine to come up
	I0626 20:47:18.020296   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.021367   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.021400   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.021287   48478 retry.go:31] will retry after 576.843487ms: waiting for machine to come up
	I0626 20:47:18.599674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.600326   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.600352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.600221   48478 retry.go:31] will retry after 857.329718ms: waiting for machine to come up
	I0626 20:47:19.459545   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:19.460101   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:19.460125   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:19.460050   48478 retry.go:31] will retry after 1.017747035s: waiting for machine to come up
	I0626 20:47:20.479538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:20.480140   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:20.480178   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:20.480043   48478 retry.go:31] will retry after 1.379789146s: waiting for machine to come up
	I0626 20:47:19.432325   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:19.461944   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:19.498519   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:19.512703   47605 system_pods.go:59] 9 kube-system pods found
	I0626 20:47:19.512831   47605 system_pods.go:61] "coredns-5d78c9869d-dz48f" [87a67e95-a071-4865-902b-0e401e852456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512860   47605 system_pods.go:61] "coredns-5d78c9869d-lbfsr" [adee7e6b-88b2-412e-bb2d-fc0939bca149] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512905   47605 system_pods.go:61] "etcd-embed-certs-299839" [8aefd012-6a54-4e75-afc9-cc8385212eb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:19.512937   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [e178b5e8-445c-444f-965e-051233c2fa44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:19.512971   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [e965e4af-a673-4b93-bb63-e7bfc0f9514d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:19.512995   47605 system_pods.go:61] "kube-proxy-q5khr" [6c11d667-3490-4417-8e0c-373fe25d06b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:19.513014   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [0385958c-3f22-4eb8-bdac-cbaeb52fe9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:19.513050   47605 system_pods.go:61] "metrics-server-74d5c6b9c-gb6b2" [b5a15d68-23ee-4274-a147-db6f2eef97e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:19.513074   47605 system_pods.go:61] "storage-provisioner" [42bd8483-f594-4bf9-8c32-9688d1d99523] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:19.513093   47605 system_pods.go:74] duration metric: took 14.550735ms to wait for pod list to return data ...
	I0626 20:47:19.513125   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:19.519356   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:19.519455   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:19.519513   47605 node_conditions.go:105] duration metric: took 6.36764ms to run NodePressure ...
	I0626 20:47:19.519573   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:19.935407   47605 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943592   47605 kubeadm.go:787] kubelet initialised
	I0626 20:47:19.943622   47605 kubeadm.go:788] duration metric: took 8.187833ms waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943633   47605 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:19.951319   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.957985   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958016   47605 pod_ready.go:81] duration metric: took 6.605612ms waiting for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.958027   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958037   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.965229   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965312   47605 pod_ready.go:81] duration metric: took 7.251456ms waiting for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.965335   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965391   47605 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:22.010596   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:21.752755   47779 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.48920102s)
	I0626 20:47:21.752790   47779 crio.go:451] Took 3.489344 seconds to extract the tarball
	I0626 20:47:21.752802   47779 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:47:21.800026   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:21.844486   47779 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:47:21.844504   47779 cache_images.go:84] Images are preloaded, skipping loading
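The preload logic keys "are images present" on the kube-apiserver tag, which is why the earlier "couldn't find preloaded image" line triggered the tarball download. The same check after extraction, as a one-liner:

    sudo crictl images --output json | grep -c 'registry.k8s.io/kube-apiserver:v1.27.3'
    # a non-zero count means the preload restored the control-plane images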
	I0626 20:47:21.844573   47779 ssh_runner.go:195] Run: crio config
	I0626 20:47:21.924367   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:21.924397   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:21.924411   47779 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:21.924431   47779 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.238 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-473235 NodeName:default-k8s-diff-port-473235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:47:21.924593   47779 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-473235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:47:21.924685   47779 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-473235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
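Nothing in this run validates the generated config standalone (it is written to kubeadm.yaml.new and diffed a few lines below), but if it ever needs debugging, a dry-run render is a cheap sanity check that parses the file without touching the node:

    sudo /var/lib/minikube/binaries/v1.27.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run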
	I0626 20:47:21.924756   47779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:47:21.934851   47779 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:21.934951   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:21.944791   47779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0626 20:47:21.963087   47779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:21.981936   47779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0626 20:47:22.002207   47779 ssh_runner.go:195] Run: grep 192.168.61.238	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:22.006443   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:22.019555   47779 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235 for IP: 192.168.61.238
	I0626 20:47:22.019591   47779 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:22.019794   47779 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:22.019859   47779 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:22.019983   47779 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.key
	I0626 20:47:22.020069   47779 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key.761b3e7f
	I0626 20:47:22.020126   47779 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key
	I0626 20:47:22.020257   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:22.020296   47779 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:22.020309   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:22.020340   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:22.020376   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:22.020418   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:22.020475   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:22.021354   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:22.045205   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:22.069269   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:22.092387   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:22.120395   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:22.143199   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:22.167864   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:22.192223   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:22.218085   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:22.243249   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:22.269200   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:22.294015   47779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:22.313139   47779 ssh_runner.go:195] Run: openssl version
	I0626 20:47:22.319998   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:22.330864   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337082   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337144   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.343158   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:22.354507   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:22.366438   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371070   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371127   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.376858   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:22.387928   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:22.398665   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403091   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403139   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.410314   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
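The eight-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is how OpenSSL locates a CA under /etc/ssl/certs; the pattern generalizes as:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h=b5213941 for this CA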
	I0626 20:47:22.421729   47779 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:22.426373   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:22.432450   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:22.438093   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:22.446065   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:22.452103   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:22.457940   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
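Each -checkend 86400 call above asks openssl whether the certificate expires within the next 24 hours; a non-zero exit is what would force regeneration. In isolation:

    if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "cert expires within 24h - regeneration needed"
    fi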
	I0626 20:47:22.464492   47779 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:22.464647   47779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:22.464707   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:22.497723   47779 cri.go:89] found id: ""
	I0626 20:47:22.497803   47779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:22.508914   47779 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:22.508940   47779 kubeadm.go:636] restartCluster start
	I0626 20:47:22.508994   47779 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:22.519855   47779 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:22.521400   47779 kubeconfig.go:92] found "default-k8s-diff-port-473235" server: "https://192.168.61.238:8444"
	I0626 20:47:22.525126   47779 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:22.536252   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:22.536311   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:22.548698   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.049731   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.049805   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.062575   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.548966   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.549050   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.566351   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.048839   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.048917   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.065016   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.549110   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.549211   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.563150   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:25.049739   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.049828   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.066148   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
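The half-second cadence above is restartCluster polling for a kube-apiserver process before deciding whether the control plane survived the restart. The loop amounts to (deadline handling omitted):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done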
	I0626 20:47:23.496598   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.496624   47309 pod_ready.go:81] duration metric: took 2.105519396s waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.496637   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504045   47309 pod_ready.go:92] pod "kube-proxy-phrv2" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.504067   47309 pod_ready.go:81] duration metric: took 7.42294ms waiting for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504078   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022096   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:25.022123   47309 pod_ready.go:81] duration metric: took 1.518037516s waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022135   47309 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.861798   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:21.981234   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:21.981272   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:21.862292   48478 retry.go:31] will retry after 2.138021733s: waiting for machine to come up
	I0626 20:47:24.002651   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:24.003184   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:24.003215   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:24.003122   48478 retry.go:31] will retry after 2.016131828s: waiting for machine to come up
	I0626 20:47:26.020987   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:26.021487   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:26.021511   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:26.021427   48478 retry.go:31] will retry after 2.317082546s: waiting for machine to come up
	I0626 20:47:24.497636   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:26.997525   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:27.997348   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:27.997394   47605 pod_ready.go:81] duration metric: took 8.031967272s waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:27.997408   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.548979   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.549054   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.566040   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.049569   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.049636   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.061513   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.548864   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.548952   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.566095   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.049674   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.049818   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.067169   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.549748   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.549831   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.568977   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.048852   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.048921   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.064935   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.549510   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.549614   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.562781   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.049396   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.049482   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.063237   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.548762   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.548853   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.561289   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:30.048758   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.048832   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.061079   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.040010   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:29.536317   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.537367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:28.340238   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:28.340738   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:28.340774   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:28.340660   48478 retry.go:31] will retry after 3.9887538s: waiting for machine to come up
	I0626 20:47:30.014224   47605 pod_ready.go:102] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.016636   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.016660   47605 pod_ready.go:81] duration metric: took 3.019245103s waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.016669   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022769   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.022794   47605 pod_ready.go:81] duration metric: took 6.118745ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022806   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.031975   47605 pod_ready.go:92] pod "kube-proxy-q5khr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.032004   47605 pod_ready.go:81] duration metric: took 9.189713ms waiting for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.032015   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040203   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.040231   47605 pod_ready.go:81] duration metric: took 8.207477ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040244   47605 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:33.054175   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:30.549812   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.549897   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.562540   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.049000   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.049071   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.061358   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.549602   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.549664   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.562690   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.049131   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:32.049223   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:32.061951   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.536775   47779 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:32.536827   47779 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:32.536843   47779 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:32.536914   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:32.571353   47779 cri.go:89] found id: ""
	I0626 20:47:32.571434   47779 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:32.588931   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:32.599519   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
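The config check at kubeadm.go:152 above requires all four kubeconfig files to exist before it attempts stale-config cleanup; any missing file short-circuits the step. A small sketch of that gate, with the same file list (the early-return behavior is inferred from the "skipping stale config cleanup" message):

package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			// Mirrors the log: any missing kubeconfig skips the cleanup.
			fmt.Println("config check failed, skipping stale config cleanup:", err)
			return
		}
	}
	fmt.Println("all configs present; stale config cleanup would run here")
}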
	I0626 20:47:32.599585   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610183   47779 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610212   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:32.738386   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.418561   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.612946   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.740311   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
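Rather than a full kubeadm init, the reconfigure path above replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same generated kubeadm.yaml. A sketch of that sequence; the local loop is illustrative, since minikube issues these commands over SSH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		// Same command shape as the log lines above.
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}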
	I0626 20:47:33.830927   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:33.830992   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.372343   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.872109   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:33.542864   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:36.037521   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:32.332668   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:32.333139   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:32.333169   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:32.333084   48478 retry.go:31] will retry after 3.571549947s: waiting for machine to come up
	I0626 20:47:35.906478   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.906962   46683 main.go:141] libmachine: (old-k8s-version-490377) Found IP for machine: 192.168.72.111
	I0626 20:47:35.906994   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has current primary IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.907004   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserving static IP address...
	I0626 20:47:35.907527   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.907573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | skip adding static IP to network mk-old-k8s-version-490377 - found existing host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"}
	I0626 20:47:35.907588   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserved static IP address: 192.168.72.111
	I0626 20:47:35.907605   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting for SSH to be available...
	I0626 20:47:35.907658   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Getting to WaitForSSH function...
	I0626 20:47:35.909932   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910346   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.910383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH client type: external
	I0626 20:47:35.910573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa (-rw-------)
	I0626 20:47:35.910604   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:35.910620   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | About to run SSH command:
	I0626 20:47:35.910635   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | exit 0
	I0626 20:47:36.006056   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | SSH cmd err, output: <nil>: 
	I0626 20:47:36.006429   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetConfigRaw
	I0626 20:47:36.007160   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.010144   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010519   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.010551   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010863   46683 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/config.json ...
	I0626 20:47:36.011106   46683 machine.go:88] provisioning docker machine ...
	I0626 20:47:36.011130   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.011366   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011542   46683 buildroot.go:166] provisioning hostname "old-k8s-version-490377"
	I0626 20:47:36.011561   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011705   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.014236   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014643   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.014674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014821   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.015013   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015156   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015371   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.015595   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.016010   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.016029   46683 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-490377 && echo "old-k8s-version-490377" | sudo tee /etc/hostname
	I0626 20:47:36.160735   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490377
	
	I0626 20:47:36.160797   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.163857   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164373   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.164425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164566   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.164778   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.164983   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.165128   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.165311   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.166001   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.166030   46683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-490377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-490377/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-490377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:36.302740   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:47:36.302789   46683 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:36.302839   46683 buildroot.go:174] setting up certificates
	I0626 20:47:36.302852   46683 provision.go:83] configureAuth start
	I0626 20:47:36.302868   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.303151   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.305958   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306411   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.306439   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306667   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.309069   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309447   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.309480   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309538   46683 provision.go:138] copyHostCerts
	I0626 20:47:36.309622   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:36.309635   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:36.309702   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:36.309813   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:36.309830   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:36.309868   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:36.309938   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:36.309947   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:36.309970   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:36.310026   46683 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-490377 san=[192.168.72.111 192.168.72.111 localhost 127.0.0.1 minikube old-k8s-version-490377]
	I0626 20:47:36.441131   46683 provision.go:172] copyRemoteCerts
	I0626 20:47:36.441183   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:36.441204   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.444557   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445034   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.445067   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445311   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.445540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.445700   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.445857   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:36.542375   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:36.570185   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0626 20:47:36.596725   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:36.622954   46683 provision.go:86] duration metric: configureAuth took 320.087643ms
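provision.go:112 above generates a server certificate whose SAN list covers the machine IP, localhost, and both hostnames. A rough sketch with Go's crypto/x509; the key size, validity window, and self-signing are assumptions for brevity (minikube signs with the ca.pem/ca-key.pem pair named in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-490377"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list taken from the provision.go:112 line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-490377"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.111"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here for brevity; minikube signs with the machine CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}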
	I0626 20:47:36.622983   46683 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:36.623205   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:47:36.623301   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.626305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626634   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.626666   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626856   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.627048   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627224   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627349   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.627520   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.627929   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.627954   46683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:36.963666   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:36.963695   46683 machine.go:91] provisioned docker machine in 952.57418ms
	I0626 20:47:36.963707   46683 start.go:300] post-start starting for "old-k8s-version-490377" (driver="kvm2")
	I0626 20:47:36.963719   46683 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:36.963747   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.964067   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:36.964099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.966948   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.967383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967528   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.967735   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.967900   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.968052   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.070309   46683 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:37.075040   46683 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:37.075064   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:37.075125   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:37.075208   46683 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:37.075306   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:37.086362   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:37.110475   46683 start.go:303] post-start completed in 146.752359ms
	I0626 20:47:37.110502   46683 fix.go:56] fixHost completed within 22.522880386s
	I0626 20:47:37.110525   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.113530   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.113925   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.113961   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.114168   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.114372   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114577   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114730   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.114896   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:37.115549   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:37.115572   46683 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:47:37.247352   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812457.183569581
	
	I0626 20:47:37.247376   46683 fix.go:206] guest clock: 1687812457.183569581
	I0626 20:47:37.247386   46683 fix.go:219] Guest: 2023-06-26 20:47:37.183569581 +0000 UTC Remote: 2023-06-26 20:47:37.110506986 +0000 UTC m=+360.350082215 (delta=73.062595ms)
	I0626 20:47:37.247410   46683 fix.go:190] guest clock delta is within tolerance: 73.062595ms
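The clock check above parses the guest's date +%s.%N output and compares it with the host time: 20:47:37.183569581 minus 20:47:37.110506986 gives exactly the reported 73.062595ms delta. A tiny sketch of the comparison; the one-second tolerance is an assumption, as the log only states the delta is within tolerance:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Date(2023, 6, 26, 20, 47, 37, 183569581, time.UTC)
	remote := time.Date(2023, 6, 26, 20, 47, 37, 110506986, time.UTC)
	delta := guest.Sub(remote)        // 73.062595ms, matching the log
	const tolerance = time.Second     // assumed; the log only says "within tolerance"
	fmt.Printf("delta=%v within=%v\n", delta, delta > -tolerance && delta < tolerance)
}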
	I0626 20:47:37.247416   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 22.659832787s
	I0626 20:47:37.247442   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.247723   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:37.250740   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251154   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.251194   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251316   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.251835   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252015   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252101   46683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:37.252144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.252251   46683 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:37.252273   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.255147   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255231   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255440   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255464   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255584   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.255756   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.255765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255792   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255930   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.255946   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.256080   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.256099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.256206   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.256301   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.370571   46683 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:37.376548   46683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:37.531359   46683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:37.540038   46683 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:37.540104   46683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:37.556531   46683 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:47:37.556554   46683 start.go:466] detecting cgroup driver to use...
	I0626 20:47:37.556620   46683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:37.574430   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:37.586766   46683 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:37.586829   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:37.599572   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:37.612901   46683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:37.717489   46683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:37.851503   46683 docker.go:212] disabling docker service ...
	I0626 20:47:37.851576   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:37.864932   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:37.877087   46683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:37.990007   46683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:38.107613   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:47:38.122183   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:38.141502   46683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0626 20:47:38.141567   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.152052   46683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:38.152128   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.161786   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.172779   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
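The four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and replace any existing conmon_cgroup line with conmon_cgroup = "pod". The commands below are copied verbatim from the log into a small driver sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Copied verbatim from the ssh_runner invocations above.
	edits := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
			fmt.Printf("edit failed: %v\n%s", err, out)
			return
		}
	}
}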
	I0626 20:47:38.182823   46683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:38.192695   46683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:38.201322   46683 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:38.201404   46683 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:38.213549   46683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:47:38.225080   46683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:38.336249   46683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:38.508323   46683 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:38.508443   46683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:47:38.514430   46683 start.go:534] Will wait 60s for crictl version
	I0626 20:47:38.514496   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:38.518918   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:38.559642   46683 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:38.559731   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.616720   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.678573   46683 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
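Between restarting CRI-O and running crictl, the start path above waits up to 60s for /var/run/crio/crio.sock to appear before trusting the runtime. A sketch of that wait; the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second) // the log's "Will wait 60s"
	for time.Now().Before(deadline) {
		if _, err := os.Stat("/var/run/crio/crio.sock"); err == nil {
			fmt.Println("crio socket is up")
			return
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	fmt.Println("timed out waiting for /var/run/crio/crio.sock")
}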
	I0626 20:47:35.555132   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.053446   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:35.373039   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.872006   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.895929   47779 api_server.go:72] duration metric: took 2.064992302s to wait for apiserver process to appear ...
	I0626 20:47:35.895959   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:35.895982   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:35.896602   47779 api_server.go:269] stopped: https://192.168.61.238:8444/healthz: Get "https://192.168.61.238:8444/healthz": dial tcp 192.168.61.238:8444: connect: connection refused
	I0626 20:47:36.397305   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.868801   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.868839   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.868854   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.907251   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.907280   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.907310   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.921394   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.921428   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:40.397045   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.405040   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.405071   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:40.897690   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.904374   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.904424   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:41.396883   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:41.404743   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:47:41.420191   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:41.420219   47779 api_server.go:131] duration metric: took 5.524252602s to wait for apiserver health ...
	I0626 20:47:41.420231   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:41.420249   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:41.422187   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:47:38.537628   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:40.538267   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.680019   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:38.682934   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683263   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:38.683294   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683534   46683 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:38.687976   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:38.701534   46683 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 20:47:38.701610   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:38.739497   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:38.739584   46683 ssh_runner.go:195] Run: which lz4
	I0626 20:47:38.744080   46683 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:38.748755   46683 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:38.748792   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0626 20:47:40.654759   46683 crio.go:444] Took 1.910714 seconds to copy over tarball
	I0626 20:47:40.654830   46683 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
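The preload path above first stats /preloaded.tar.lz4 on the guest, falls back to copying the ~441MB tarball over SSH when it is missing, and then unpacks it into /var with lz4. A condensed local sketch of that flow:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
		// In the log this is where ssh_runner scp's ~441MB to the guest.
		fmt.Println("tarball missing; copy step would run here:", err)
		return
	}
	out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}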
	I0626 20:47:40.057751   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:42.555707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:41.423617   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:41.447117   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
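cni.go above writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist for the recommended bridge CNI. The log does not show the file's contents; the snippet below emits a generic bridge-plus-portmap conflist purely for illustration, not the exact bytes minikube generates:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Generic bridge+portmap conflist; NOT the exact file minikube wrote.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []any{
			map[string]any{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			map[string]any{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	b, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(b))
}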
	I0626 20:47:41.485897   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:41.505667   47779 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:41.505714   47779 system_pods.go:61] "coredns-5d78c9869d-78zrr" [2927dce3-aa13-4ed4-b5a4-bc1b101ec044] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:41.505730   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [5bbba401-cfdd-4e97-ac44-3d1410344b23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:41.505742   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [90d064bc-d31f-4690-b100-8979cdd518c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:41.505755   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [3f686efe-3c90-42ed-a1b9-2cda3e7e49b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:41.505773   47779 system_pods.go:61] "kube-proxy-7t2dk" [bebeb55d-8c7d-4543-9ee1-adbd946904f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:41.505786   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [c2436cf6-0128-425c-9db3-b3d01e5fb5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:41.505799   47779 system_pods.go:61] "metrics-server-74d5c6b9c-swcxn" [81e42c6b-4c7d-40b1-bd4a-ccf7ce2dea17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:41.505811   47779 system_pods.go:61] "storage-provisioner" [18d1c7dc-00a6-4842-b441-f3468adde4ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:41.505822   47779 system_pods.go:74] duration metric: took 19.895923ms to wait for pod list to return data ...
	I0626 20:47:41.505833   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:41.515165   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:41.515201   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:41.515215   47779 node_conditions.go:105] duration metric: took 9.372368ms to run NodePressure ...
	I0626 20:47:41.515243   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:41.848353   47779 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854780   47779 kubeadm.go:787] kubelet initialised
	I0626 20:47:41.854805   47779 kubeadm.go:788] duration metric: took 6.420882ms waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854814   47779 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:41.861323   47779 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.867181   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867214   47779 pod_ready.go:81] duration metric: took 5.86597ms waiting for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.867225   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867235   47779 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.872900   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872928   47779 pod_ready.go:81] duration metric: took 5.684109ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.872940   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872948   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.881471   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881501   47779 pod_ready.go:81] duration metric: took 8.543041ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.881513   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881531   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.892246   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892292   47779 pod_ready.go:81] duration metric: took 10.741136ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.892310   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892325   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297272   47779 pod_ready.go:92] pod "kube-proxy-7t2dk" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:43.297299   47779 pod_ready.go:81] duration metric: took 1.404965565s waiting for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297308   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
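
The pod_ready waits interleaved through this log all follow one pattern: poll the pod object and return once its Ready condition is True, or give up at the 4m0s deadline. A minimal client-go sketch of that pattern (illustration only, not minikube's pod_ready.go; the kubeconfig path, namespace, and pod name are assumptions taken from this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports Ready=True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // transient errors and NotReady both just retry
	}
	return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitPodReady(cs, "kube-system", "kube-proxy-7t2dk", 4*time.Minute); err != nil {
		panic(err)
	}
}
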
	I0626 20:47:42.544224   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.846930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.389432   46683 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.73456858s)
	I0626 20:47:44.389462   46683 crio.go:451] Took 3.734677 seconds to extract the tarball
	I0626 20:47:44.389480   46683 ssh_runner.go:146] rm: /preloaded.tar.lz4
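
For reference, the preload flow logged above (existence check, extract with lz4, delete the tarball) reduces to a few commands. A local Go sketch of the same sequence, assuming lz4 and sudo are available on the host (minikube actually runs each step on the guest over SSH via ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Equivalent of the stat existence check above.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("no preload tarball:", err)
		return
	}
	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	// Equivalent of the rm step that follows extraction.
	if err := os.Remove(tarball); err != nil {
		panic(err)
	}
}
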
	I0626 20:47:44.438169   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:44.478220   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:44.478250   46683 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:47:44.478337   46683 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.478364   46683 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.478383   46683 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.478384   46683 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.478450   46683 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0626 20:47:44.478365   46683 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.478345   46683 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.478339   46683 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479752   46683 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.479758   46683 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.479760   46683 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.479759   46683 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.479748   46683 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.479802   46683 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.479810   46683 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479817   46683 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0626 20:47:44.681554   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720619   46683 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0626 20:47:44.720677   46683 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720730   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.724810   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.753258   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0626 20:47:44.765072   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.767167   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.768723   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0626 20:47:44.769466   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.769474   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.807428   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.904206   46683 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0626 20:47:44.904243   46683 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0626 20:47:44.904250   46683 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.904261   46683 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926166   46683 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0626 20:47:44.926203   46683 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.926204   46683 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0626 20:47:44.926222   46683 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.926222   46683 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0626 20:47:44.926248   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926247   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926251   46683 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0626 20:47:44.926365   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936135   46683 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0626 20:47:44.936175   46683 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.936236   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936252   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.936274   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.940272   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.940352   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0626 20:47:44.940409   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.952147   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:45.031640   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0626 20:47:45.031677   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0626 20:47:45.061947   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0626 20:47:45.062070   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0626 20:47:45.062166   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0626 20:47:45.062261   46683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.062279   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0626 20:47:45.067511   46683 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0626 20:47:45.067561   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0626 20:47:45.094726   46683 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.094780   46683 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.384887   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:45.947601   46683 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0626 20:47:45.947707   46683 cache_images.go:92] LoadImages completed in 1.469441722s
	W0626 20:47:45.947778   46683 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0626 20:47:45.947863   46683 ssh_runner.go:195] Run: crio config
	I0626 20:47:46.009928   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:47:46.009955   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:46.009968   46683 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:46.009987   46683 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-490377 NodeName:old-k8s-version-490377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0626 20:47:46.010140   46683 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-490377"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-490377
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.111:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:47:46.010224   46683 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-490377 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
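
minikube renders the kubeadm YAML shown above from a Go template filled with the options struct it logged just before it. A heavily reduced sketch of that render step (the struct fields and template body here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Hypothetical subset of the fields substituted into the kubeadm template.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	K8sVersion       string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{"192.168.72.111", 8443, "old-k8s-version-490377", "v1.16.0", "10.244.0.0/16"}
	// Render the manifest to stdout; in the real flow the result is what later
	// lands at /var/tmp/minikube/kubeadm.yaml.new, as logged below.
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
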
	I0626 20:47:46.010284   46683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0626 20:47:46.023111   46683 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:46.023196   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:46.034988   46683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0626 20:47:46.056824   46683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:46.077802   46683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0626 20:47:46.102465   46683 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:46.107391   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:46.121242   46683 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377 for IP: 192.168.72.111
	I0626 20:47:46.121277   46683 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:46.121466   46683 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:46.121520   46683 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:46.121635   46683 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.key
	I0626 20:47:46.121735   46683 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key.760f2aeb
	I0626 20:47:46.121789   46683 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key
	I0626 20:47:46.121928   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:46.121970   46683 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:46.121985   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:46.122024   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:46.122063   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:46.122098   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:46.122158   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:46.123026   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:46.149101   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:46.179305   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:46.207421   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:46.233407   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:46.259148   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:46.284728   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:46.312152   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:46.341061   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:46.370455   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:46.398160   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:46.424710   46683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:46.446379   46683 ssh_runner.go:195] Run: openssl version
	I0626 20:47:46.452825   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:46.466808   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472676   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472760   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.479077   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:46.490061   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:46.501801   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.506966   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.507034   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.513146   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:46.523600   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:46.534659   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540612   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540677   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.548499   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:47:46.562786   46683 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:46.569679   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:46.576129   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:46.582331   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:46.588334   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:46.595635   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:46.603058   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
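
Each openssl run above is a 24-hour expiry check: x509 -checkend 86400 exits 0 if the certificate is still valid a day from now and 1 otherwise. A rough Go equivalent for one of the certificates checked (path taken from this run):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Succeed only if the certificate is still valid 24h from now,
	// mirroring `openssl x509 -noout -checkend 86400`.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 86400 seconds")
}
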
	I0626 20:47:46.611126   46683 kubeadm.go:404] StartCluster: {Name:old-k8s-version-490377 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:46.611211   46683 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:46.611277   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:46.650099   46683 cri.go:89] found id: ""
	I0626 20:47:46.650177   46683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:46.660940   46683 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:46.660964   46683 kubeadm.go:636] restartCluster start
	I0626 20:47:46.661022   46683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:46.671400   46683 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:46.672450   46683 kubeconfig.go:92] found "old-k8s-version-490377" server: "https://192.168.72.111:8443"
	I0626 20:47:46.675477   46683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:46.684496   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:46.684568   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:46.695719   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:45.056085   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.554295   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:45.865956   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:48.003697   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.505286   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:49.505314   47779 pod_ready.go:81] duration metric: took 6.207998312s waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:49.505328   47779 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:47.037142   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.037207   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.535460   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.196149   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.196252   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.211751   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:47.696286   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.696381   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.707472   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.195967   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.196041   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.207809   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.696375   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.696449   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.708571   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.196097   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.196176   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.207717   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.696692   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.696768   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.708954   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.196531   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.196611   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.209111   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.696563   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.696648   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.708744   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.196237   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.196305   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.207654   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.695908   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.695988   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.708029   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.056186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.057083   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.519442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.520019   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.536833   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.036673   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.196170   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.196233   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.208953   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:52.696518   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.696600   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.707537   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.196046   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.196113   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.207272   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.695791   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.695873   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.706845   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.196452   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.196530   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.208048   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.696169   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.696236   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.707640   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.195889   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.195968   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.207560   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.695899   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.695978   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.707573   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:56.195900   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:56.195973   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:56.207335   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:56.685138   46683 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:56.685165   46683 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:56.685180   46683 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:56.685239   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:56.719427   46683 cri.go:89] found id: ""
	I0626 20:47:56.719494   46683 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:56.735328   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:56.747355   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:56.747420   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756129   46683 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756156   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:54.554213   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:57.052902   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:59.055349   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.018337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.025514   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.039195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.538216   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.883656   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.423073   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.641018   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.751205   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.840521   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:57.840645   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.355178   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.854929   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.355164   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.385611   46683 api_server.go:72] duration metric: took 1.545094971s to wait for apiserver process to appear ...
	I0626 20:47:59.385632   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:59.385650   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:01.553510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.554922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.520442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.021809   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.040767   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.535801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:04.386860   46683 api_server.go:269] stopped: https://192.168.72.111:8443/healthz: Get "https://192.168.72.111:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0626 20:48:04.888001   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:05.958461   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:48:05.958486   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:48:05.958498   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.017029   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.017061   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.387577   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.394038   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.394072   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.887033   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.902891   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.902931   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:07.387632   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:07.393827   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:48:07.402591   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:48:07.402618   46683 api_server.go:131] duration metric: took 8.016980167s to wait for apiserver health ...
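
The healthz sequence above is typical of a cold restart: anonymous probes get 403 until the default RBAC roles that expose /healthz to unauthenticated users are created, then 500 while the remaining post-start hooks settle, then 200. A minimal Go poller in the same spirit (sketch only, not minikube's api_server.go; it probes anonymously and skips verification of the cluster's self-signed CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe against a self-signed apiserver cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.72.111:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver reports "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
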
	I0626 20:48:07.402628   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:48:07.402639   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:48:07.404494   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
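The probes above show api_server.go polling https://192.168.72.111:8443/healthz roughly every half second until the post-start hooks settle and the endpoint flips from 500 to 200. A stdlib-only sketch of that polling pattern (the function name, the skip-verify client, and the interval are illustrative, not minikube's actual implementation):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 or the timeout elapses,
    // printing the 500 bodies ("[-]poststarthook/... failed") in between.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // the "returned 200: ok" case above
                }
                fmt.Printf("status %d:\n%s", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // probes in the log are ~0.5s apart
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.111:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }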
	I0626 20:48:06.054185   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:08.055165   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.520306   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.521293   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:10.021358   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.537058   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:09.537801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.405919   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:48:07.416748   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
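The `scp memory` line above copies a 457-byte bridge conflist into /etc/cni/net.d; the payload itself is not shown in the log, so the subnet and plugin settings below are representative guesses, not minikube's actual template. A sketch that writes such a file locally:

    package main

    import "os"

    // Illustrative bridge CNI conflist; the values are assumptions, the log
    // only shows the destination path and the byte count.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Mirrors `scp memory --> /etc/cni/net.d/1-k8s.conflist`, written to
        // /tmp here instead of over SSH to the guest.
        if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }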
	I0626 20:48:07.436249   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:48:07.445695   46683 system_pods.go:59] 7 kube-system pods found
	I0626 20:48:07.445732   46683 system_pods.go:61] "coredns-5644d7b6d9-5lcxw" [8e1a5fff-55d8-4d32-ae6f-c7694c8b5878] Running
	I0626 20:48:07.445741   46683 system_pods.go:61] "etcd-old-k8s-version-490377" [3fff7ab3-7ac7-4417-b3b8-9794f427c880] Running
	I0626 20:48:07.445750   46683 system_pods.go:61] "kube-apiserver-old-k8s-version-490377" [1b8e6b87-0b15-4586-8133-2dd33ac0b069] Running
	I0626 20:48:07.445771   46683 system_pods.go:61] "kube-controller-manager-old-k8s-version-490377" [2635a03c-884d-4245-a8ef-cb02e14443b8] Running
	I0626 20:48:07.445792   46683 system_pods.go:61] "kube-proxy-64btm" [0a8ee3c6-93a1-4989-94d0-209e8c655a64] Running
	I0626 20:48:07.445805   46683 system_pods.go:61] "kube-scheduler-old-k8s-version-490377" [2a6905a0-4f64-4cab-9b6d-55c708c07f8d] Running
	I0626 20:48:07.445815   46683 system_pods.go:61] "storage-provisioner" [9bf36874-b862-41f9-89d4-2d900adc2003] Running
	I0626 20:48:07.445826   46683 system_pods.go:74] duration metric: took 9.553318ms to wait for pod list to return data ...
	I0626 20:48:07.445836   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:48:07.450777   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:48:07.450816   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:48:07.450831   46683 node_conditions.go:105] duration metric: took 4.985221ms to run NodePressure ...
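The node_conditions.go lines above verify that the node reports capacity (2 CPUs, ~17Gi ephemeral storage) and that no pressure condition is set. A sketch of that check against a synthetic node, using the k8s.io/api and apimachinery types (noNodePressure is a hypothetical name; the real helper differs):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // noNodePressure reports whether none of the pressure conditions are True.
    func noNodePressure(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            switch c.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                if c.Status == corev1.ConditionTrue {
                    return false
                }
            }
        }
        return true
    }

    func main() {
        node := &corev1.Node{Status: corev1.NodeStatus{
            Capacity: corev1.ResourceList{
                corev1.ResourceCPU:              resource.MustParse("2"),
                corev1.ResourceEphemeralStorage: resource.MustParse("17784752Ki"),
            },
            Conditions: []corev1.NodeCondition{
                {Type: corev1.NodeMemoryPressure, Status: corev1.ConditionFalse},
                {Type: corev1.NodeDiskPressure, Status: corev1.ConditionFalse},
            },
        }}
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
        fmt.Printf("node cpu capacity is %d\n", cpu.Value())
        fmt.Println("no pressure:", noNodePressure(node))
    }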
	I0626 20:48:07.450854   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:48:07.693070   46683 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:48:07.696336   46683 retry.go:31] will retry after 291.332727ms: kubelet not initialised
	I0626 20:48:07.992856   46683 retry.go:31] will retry after 210.561512ms: kubelet not initialised
	I0626 20:48:08.208369   46683 retry.go:31] will retry after 371.110023ms: kubelet not initialised
	I0626 20:48:08.585342   46683 retry.go:31] will retry after 1.199452561s: kubelet not initialised
	I0626 20:48:09.790625   46683 retry.go:31] will retry after 923.734482ms: kubelet not initialised
	I0626 20:48:10.719166   46683 retry.go:31] will retry after 1.019822632s: kubelet not initialised
	I0626 20:48:11.743554   46683 retry.go:31] will retry after 3.253867153s: kubelet not initialised
	I0626 20:48:10.552964   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.554534   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.520923   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.019384   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.036991   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:14.536734   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.002028   46683 retry.go:31] will retry after 2.234934883s: kubelet not initialised
	I0626 20:48:14.556223   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.053741   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.054276   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.021470   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.519794   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.036192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.036285   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:21.037136   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.242709   46683 retry.go:31] will retry after 6.079359776s: kubelet not initialised
	I0626 20:48:21.054851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.553653   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:22.020435   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:24.022102   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.037271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:25.037337   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.328332   46683 retry.go:31] will retry after 12.999865358s: kubelet not initialised
	I0626 20:48:25.553983   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.052253   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:26.518782   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.520217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:27.535792   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:29.536336   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:30.055419   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.553794   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:31.018773   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:33.020048   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:35.021492   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.036513   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:34.037364   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.535663   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.334795   46683 retry.go:31] will retry after 13.541680893s: kubelet not initialised
	I0626 20:48:35.052975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.053634   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.053672   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.519603   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.520279   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:38.536271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:40.536344   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.553411   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.554235   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.520569   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.522354   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:42.536811   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.035291   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.554795   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.053080   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:46.019919   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.021534   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:47.036908   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.537386   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.882566   46683 kubeadm.go:787] kubelet initialised
	I0626 20:48:49.882597   46683 kubeadm.go:788] duration metric: took 42.189498896s waiting for restarted kubelet to initialise ...
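The `retry.go:31] will retry after ...` lines above come from a helper that re-runs a check with growing, jittered delays until the kubelet reports initialised (42s in this run). A minimal stdlib sketch of that shape (retryUntil and its backoff constants are hypothetical, not minikube's retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil re-runs check with a jittered, roughly doubling delay until
    // it succeeds or the deadline passes.
    func retryUntil(deadline time.Time, check func() error) error {
        delay := 200 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("will retry after %s: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
    }

    func main() {
        start := time.Now()
        _ = retryUntil(start.Add(time.Minute), func() error {
            if time.Since(start) < 3*time.Second {
                return errors.New("kubelet not initialised")
            }
            return nil
        })
    }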
	I0626 20:48:49.882608   46683 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:48:49.888018   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894462   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.894488   46683 pod_ready.go:81] duration metric: took 6.438689ms waiting for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894501   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899336   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.899358   46683 pod_ready.go:81] duration metric: took 4.848554ms waiting for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899370   46683 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903574   46683 pod_ready.go:92] pod "etcd-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.903593   46683 pod_ready.go:81] duration metric: took 4.21548ms waiting for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903605   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908052   46683 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.908071   46683 pod_ready.go:81] duration metric: took 4.457812ms waiting for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908091   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281099   46683 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.281124   46683 pod_ready.go:81] duration metric: took 373.02512ms waiting for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281139   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681520   46683 pod_ready.go:92] pod "kube-proxy-64btm" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.681541   46683 pod_ready.go:81] duration metric: took 400.395983ms waiting for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681552   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081638   46683 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:51.081657   46683 pod_ready.go:81] duration metric: took 400.09969ms waiting for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081666   46683 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
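Every pod_ready.go line in this stretch is one poll of a pod's PodReady condition, and four separate runs (PIDs 46683, 47605, 47779, 47309) interleave here, each waiting on its own metrics-server pod under a 4m0s budget. A hedged client-go sketch of a single such wait (the kubeconfig path and pod name are placeholders; assumes client-go v0.18+ where Get takes a context):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady mirrors the check behind the `has status "Ready":"True"`
    // lines: the pod's PodReady condition must be True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-example", metav1.GetOptions{}) // placeholder name
            if err == nil && isPodReady(pod) {
                fmt.Println(`pod has status "Ready":"True"`)
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("waitPodCondition: context deadline exceeded")
                return
            case <-time.After(2 * time.Second): // the log polls roughly every 2s
            }
        }
    }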
	I0626 20:48:50.053581   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.053802   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:50.520090   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.019821   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.020035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.037008   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.037516   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:56.037585   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.491534   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.989758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.552843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.054370   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.020770   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.520039   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.535930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.536377   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.488491   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.489659   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.552927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.056474   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:01.520560   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.019945   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.536728   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.537724   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.989651   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.989796   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.552707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.553918   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:08.554230   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.520608   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.020075   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:07.036576   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.537071   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.990147   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.489229   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.053576   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:13.054110   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.519744   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.020968   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:12.037949   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.537389   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.989856   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.488429   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.490529   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:15.553553   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.054036   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.519975   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.520288   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:17.036172   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:19.036248   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.036421   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.989943   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.990154   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.553570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.554626   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.020817   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.520602   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.036595   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.038742   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.990299   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:24.994358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.053465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.053635   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.520912   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:28.020413   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.537294   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.489707   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.990957   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.552847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:31.554360   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.052585   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:30.520207   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.521484   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:35.020064   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.035666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.036325   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.535889   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.489468   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.989668   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.556092   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.054617   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:37.519850   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:40.020217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.036499   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.537332   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.992357   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.489925   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.553528   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.052935   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:42.520450   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.520634   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.035299   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.036688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.990255   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.489449   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.553009   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.553560   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:47.018978   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.020289   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.535753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.536227   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.990710   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.490459   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.553710   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.054824   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.520532   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:54.027509   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:52.537108   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.036452   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.989608   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.990105   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.990610   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.552894   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.553520   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:56.519796   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.021401   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.537189   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.537365   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.991065   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.489396   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.053139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.062882   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:01.519625   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:03.520031   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.037036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.988698   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.991107   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.551742   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:06.553955   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.053612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:05.520676   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:08.019671   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:10.021418   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.035613   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.036666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.536861   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.488874   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.490059   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.492236   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.553481   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.054574   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:12.518824   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.519670   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.036399   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.537496   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:13.990228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.488219   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.054609   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.553511   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.519795   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.520535   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:19.037355   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.037964   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.488819   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:20.489536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.053521   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.553922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.021035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.519784   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.535974   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.536845   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:22.988574   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:24.990088   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:26.052017   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.054905   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.520011   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.019323   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.019500   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.537999   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.036187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.488859   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:29.990482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.551701   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.554272   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.019810   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.023728   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.036817   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.042849   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.536415   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.488492   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.491986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:35.053986   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:37.055115   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.520551   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.019307   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:38.537119   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:40.537474   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.991471   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.489241   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.490458   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.552836   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.553914   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:44.052850   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.020033   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.520646   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.036648   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:45.036959   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.990768   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.489482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.053271   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.553811   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.018851   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.021042   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.021254   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:47.536099   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.036995   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.489670   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.990231   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.554677   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.053841   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.520067   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.021727   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.042201   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:54.536260   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.489402   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.492509   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.055031   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.055181   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.521342   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.020905   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.036992   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.037534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:01.538152   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.993709   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.488776   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.555263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.054478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.519672   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:05.020878   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.036330   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.036424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.489742   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.988712   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.555161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.052680   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.055326   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.519641   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.520120   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.536306   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:10.537094   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.988973   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.989715   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.488986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.554973   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.054638   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.019264   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.020253   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.537126   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.037318   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:13.490053   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.988498   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.055193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:18.553665   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.522548   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.020609   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.536765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.037132   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.990230   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.991216   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.555044   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.055590   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:21.520052   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.520574   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:22.038085   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.535549   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.022544   47309 pod_ready.go:81] duration metric: took 4m0.000394525s waiting for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:25.022570   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:25.022598   47309 pod_ready.go:38] duration metric: took 4m12.221722724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:25.022623   47309 kubeadm.go:640] restartCluster took 4m31.561880232s
	W0626 20:51:25.022684   47309 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:25.022722   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:22.489438   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.490731   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.554637   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:27.555070   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.020700   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.520337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.990408   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.990900   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.490197   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:30.053627   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.041205   47605 pod_ready.go:81] duration metric: took 4m0.000945978s waiting for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:31.041235   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:31.041252   47605 pod_ready.go:38] duration metric: took 4m11.097608636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:31.041297   47605 kubeadm.go:640] restartCluster took 4m31.299321581s
	W0626 20:51:31.041365   47605 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:31.041409   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:31.019045   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.022453   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.492871   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.989984   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.520977   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:37.521128   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.021691   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:38.489349   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.989368   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.519812   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:44.520689   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.989461   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:45.491205   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:47.019936   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.506391   47779 pod_ready.go:81] duration metric: took 4m0.001048325s waiting for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:49.506423   47779 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:49.506441   47779 pod_ready.go:38] duration metric: took 4m7.651614118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:49.506483   47779 kubeadm.go:640] restartCluster took 4m26.997522391s
	W0626 20:51:49.506561   47779 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:49.506595   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
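The three restartCluster timeouts above all follow the same pattern: pod_ready.go polls each labeled kube-system pod for the PodReady condition, gives up when the 4m0s context deadline expires, and minikube falls back to `kubeadm reset` followed by a fresh `kubeadm init`. A minimal client-go sketch of the readiness check being polled, assuming only the kubeconfig path shown elsewhere in the log; the program structure is illustrative, not minikube's actual pod_ready.go:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the PodReady condition is True -- the same
	// condition the "Ready":"False" log lines above keep polling.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The metrics-server pods above carry the k8s-app=metrics-server label.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, "ready:", podReady(&p))
		}
	}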
	I0626 20:51:47.990134   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.990758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:52.489144   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:54.990008   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:56.650050   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.627303734s)
	I0626 20:51:56.650132   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:51:56.665246   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:51:56.678749   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:51:56.690413   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
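The config check above is a plain existence probe: `ls -la` over the four kubeconfig files exits with status 2 when any of them is missing, which minikube treats as "no stale config to clean up" before re-running `kubeadm init`. A standalone Go sketch of the same probe, under the assumption that GNU ls semantics apply; the function and program are invented for illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// staleConfigPresent mirrors the log's check: ls exits non-zero
	// (status 2) if any listed path is absent, so a nil error means all
	// four kubeconfig files still exist and would need cleanup.
	func staleConfigPresent() bool {
		cmd := exec.Command("sudo", "ls", "-la",
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf")
		return cmd.Run() == nil
	}

	func main() {
		fmt.Println("stale config present:", staleConfigPresent())
	}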
	I0626 20:51:56.690459   47309 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:51:56.757308   47309 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:51:56.757415   47309 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:51:56.915845   47309 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:51:56.916021   47309 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:51:56.916158   47309 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:51:57.137465   47309 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:51:57.139330   47309 out.go:204]   - Generating certificates and keys ...
	I0626 20:51:57.139431   47309 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:51:57.139514   47309 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:51:57.139648   47309 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:51:57.139718   47309 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:51:57.139852   47309 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:51:57.139914   47309 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:51:57.139997   47309 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:51:57.140101   47309 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:51:57.140224   47309 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:51:57.140830   47309 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:51:57.141343   47309 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:51:57.141471   47309 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:51:57.294061   47309 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:51:57.436714   47309 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:51:57.707612   47309 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:51:57.875383   47309 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:51:57.893698   47309 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:51:57.895257   47309 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:51:57.895427   47309 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:51:58.020261   47309 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:51:58.022209   47309 out.go:204]   - Booting up control plane ...
	I0626 20:51:58.022349   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:51:58.023359   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:51:58.024253   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:51:58.026955   47309 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:51:58.032948   47309 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:51:57.489729   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:59.490578   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:01.491617   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:05.539291   47309 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505351 seconds
	I0626 20:52:05.539449   47309 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:05.564127   47309 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:06.097928   47309 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:06.098155   47309 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-934450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:06.617147   47309 kubeadm.go:322] [bootstrap-token] Using token: 7fs1fc.9teiyerfkduv7ctw
	I0626 20:52:03.989716   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.489773   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.618462   47309 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:06.618602   47309 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:06.631936   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:06.655354   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:06.662468   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:06.673817   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:06.680979   47309 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:06.717394   47309 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:07.015067   47309 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:07.079315   47309 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:07.079362   47309 kubeadm.go:322] 
	I0626 20:52:07.079450   47309 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:07.079464   47309 kubeadm.go:322] 
	I0626 20:52:07.079544   47309 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:07.079556   47309 kubeadm.go:322] 
	I0626 20:52:07.079597   47309 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:07.079680   47309 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:07.079765   47309 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:07.079782   47309 kubeadm.go:322] 
	I0626 20:52:07.079867   47309 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:07.079880   47309 kubeadm.go:322] 
	I0626 20:52:07.079960   47309 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:07.079971   47309 kubeadm.go:322] 
	I0626 20:52:07.080038   47309 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:07.080123   47309 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:07.080233   47309 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:07.080249   47309 kubeadm.go:322] 
	I0626 20:52:07.080370   47309 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:07.080467   47309 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:07.080481   47309 kubeadm.go:322] 
	I0626 20:52:07.080574   47309 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.080692   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:07.080738   47309 kubeadm.go:322] 	--control-plane 
	I0626 20:52:07.080756   47309 kubeadm.go:322] 
	I0626 20:52:07.080858   47309 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:07.080870   47309 kubeadm.go:322] 
	I0626 20:52:07.080979   47309 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.081124   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:07.082329   47309 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:07.082353   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:52:07.082369   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:07.084307   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:07.804074   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (36.762635025s)
	I0626 20:52:07.804158   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:07.819772   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:07.830166   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:07.839585   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:07.839633   47605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:08.061341   47605 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:07.085644   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:07.113105   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
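The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced by the "recommending bridge" line. The exact payload is not reproduced in the log; a representative bridge-plus-portmap conflist of roughly this shape (contents assumed, not byte-for-byte) would be:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}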
	I0626 20:52:07.158420   47309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:07.158542   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.158590   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=no-preload-934450 minikube.k8s.io/updated_at=2023_06_26T20_52_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.637925   47309 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:07.638078   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.262589   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.762326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.262326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.762334   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.262485   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.762376   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:11.262645   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.490810   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:10.990521   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:11.762599   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.262690   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.762512   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.262844   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.762234   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.262587   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.762670   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.262293   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.763106   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:16.263264   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.991151   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:15.489549   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:19.659464   47605 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:19.659534   47605 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:19.659620   47605 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:19.659793   47605 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:19.659913   47605 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:52:19.659993   47605 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:19.661681   47605 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:19.661770   47605 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:19.661860   47605 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:19.661969   47605 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:19.662065   47605 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:19.662158   47605 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:19.662226   47605 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:19.662321   47605 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:19.662401   47605 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:19.662487   47605 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:19.662595   47605 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:19.662649   47605 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:19.662717   47605 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:19.662779   47605 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:19.662849   47605 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:19.662928   47605 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:19.663014   47605 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:19.663128   47605 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:19.663231   47605 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:19.663286   47605 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:19.663370   47605 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:19.664951   47605 out.go:204]   - Booting up control plane ...
	I0626 20:52:19.665063   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:19.665157   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:19.665246   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:19.665347   47605 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:19.665554   47605 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:19.665662   47605 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504998 seconds
	I0626 20:52:19.665792   47605 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:19.665948   47605 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:19.666027   47605 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:19.666278   47605 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-299839 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:19.666360   47605 kubeadm.go:322] [bootstrap-token] Using token: e53kqf.6hnw5p7blg3e1mpb
	I0626 20:52:19.667988   47605 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:19.668104   47605 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:19.668203   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:19.668357   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:19.668500   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:19.668632   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:19.668732   47605 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:19.668890   47605 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:19.668953   47605 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:19.669024   47605 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:19.669042   47605 kubeadm.go:322] 
	I0626 20:52:19.669122   47605 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:19.669135   47605 kubeadm.go:322] 
	I0626 20:52:19.669243   47605 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:19.669253   47605 kubeadm.go:322] 
	I0626 20:52:19.669284   47605 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:19.669392   47605 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:19.669472   47605 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:19.669483   47605 kubeadm.go:322] 
	I0626 20:52:19.669561   47605 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:19.669571   47605 kubeadm.go:322] 
	I0626 20:52:19.669642   47605 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:19.669661   47605 kubeadm.go:322] 
	I0626 20:52:19.669724   47605 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:19.669831   47605 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:19.669941   47605 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:19.669951   47605 kubeadm.go:322] 
	I0626 20:52:19.670055   47605 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:19.670169   47605 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:19.670179   47605 kubeadm.go:322] 
	I0626 20:52:19.670283   47605 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670428   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:19.670469   47605 kubeadm.go:322] 	--control-plane 
	I0626 20:52:19.670484   47605 kubeadm.go:322] 
	I0626 20:52:19.670588   47605 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:19.670603   47605 kubeadm.go:322] 
	I0626 20:52:19.670715   47605 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670850   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:19.670863   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:52:19.670875   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:19.672750   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:16.762961   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.263008   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.762325   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.262618   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.762659   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.262343   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.763023   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.932557   47309 kubeadm.go:1081] duration metric: took 12.774065652s to wait for elevateKubeSystemPrivileges.
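The burst of identical `kubectl get sa default` invocations above, spaced roughly 500ms apart, is the elevateKubeSystemPrivileges wait: minikube retries until the default ServiceAccount exists, then records the duration (12.77s here). A minimal sketch of that polling pattern, assuming kubectl on PATH; the helper name is invented:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` on a ~500ms cadence,
	// matching the retry interval visible in the log, until the command
	// succeeds or the timeout elapses.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}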
	I0626 20:52:19.932647   47309 kubeadm.go:406] StartCluster complete in 5m26.514862376s
	I0626 20:52:19.932687   47309 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:19.932796   47309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:19.935445   47309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:19.935820   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:19.936149   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:19.936267   47309 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:19.936369   47309 addons.go:66] Setting storage-provisioner=true in profile "no-preload-934450"
	I0626 20:52:19.936388   47309 addons.go:228] Setting addon storage-provisioner=true in "no-preload-934450"
	W0626 20:52:19.936396   47309 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:19.936453   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.936890   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.936917   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.936996   47309 addons.go:66] Setting default-storageclass=true in profile "no-preload-934450"
	I0626 20:52:19.937022   47309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-934450"
	I0626 20:52:19.937178   47309 addons.go:66] Setting metrics-server=true in profile "no-preload-934450"
	I0626 20:52:19.937198   47309 addons.go:228] Setting addon metrics-server=true in "no-preload-934450"
	W0626 20:52:19.937206   47309 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:19.937259   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.937461   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937485   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.937664   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937686   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.956754   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0626 20:52:19.956777   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0626 20:52:19.956923   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I0626 20:52:19.957245   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957327   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957473   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957897   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.957918   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958063   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958078   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958217   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958240   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958385   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959001   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.959029   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.959257   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959323   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959523   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.960115   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.960168   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.980739   47309 addons.go:228] Setting addon default-storageclass=true in "no-preload-934450"
	W0626 20:52:19.980887   47309 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:19.980924   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.981308   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.981348   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.982528   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0626 20:52:19.982768   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I0626 20:52:19.983398   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984115   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984291   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.984303   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.984767   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985276   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.985294   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.985346   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.985720   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985919   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.987605   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.989810   47309 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:19.991208   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:19.991229   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:19.991248   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:19.989487   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.997528   47309 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:19.996110   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:19.996135   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999411   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:19.999436   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999495   47309 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:19.999511   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:19.999532   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.002886   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.003159   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.003321   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.004492   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.004806   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
	I0626 20:52:20.004991   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.005018   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.005189   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.005234   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.005402   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.005568   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.005703   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.005881   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.005899   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.006233   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.006590   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:20.006614   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:20.022796   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0626 20:52:20.023252   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.023827   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.023852   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.024209   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.024425   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:20.026279   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:20.026527   47309 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.026542   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:20.026559   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.029302   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029775   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.029804   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029944   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.030138   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.030321   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.030454   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.331846   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.341298   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:20.352664   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:20.352693   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:20.376961   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
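The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a `log` directive before `errors` and a `hosts` block before the `forward . /etc/resolv.conf` line, so in-cluster lookups of host.minikube.internal resolve to the host-side gateway address 192.168.50.1. After the `kubectl replace`, the affected fragment of the Corefile reads (directives between the two insertions elided):

	        log
	        errors
	        ...
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf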
	I0626 20:52:20.420573   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:20.420599   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:20.495388   47309 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-934450" context rescaled to 1 replicas
	I0626 20:52:20.495436   47309 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:20.497711   47309 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:20.499512   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:20.560528   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:20.560559   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:20.647734   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:21.308936   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.802312904s)
	I0626 20:52:21.309013   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:21.323340   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:21.333741   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:21.346686   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:21.346741   47779 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:21.427299   47779 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:21.427431   47779 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:21.598474   47779 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:21.598609   47779 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:21.598727   47779 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:52:21.802443   47779 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:17.989506   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:20.002885   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:21.804179   47779 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:21.804277   47779 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:21.804985   47779 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:21.805576   47779 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:21.806465   47779 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:21.807206   47779 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:21.807988   47779 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:21.808775   47779 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:21.809427   47779 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:21.810136   47779 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:21.810809   47779 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:21.811489   47779 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:21.811563   47779 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:22.127084   47779 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:22.371731   47779 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:22.635165   47779 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:22.843347   47779 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:22.866673   47779 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:22.868080   47779 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:22.868259   47779 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:23.015798   47779 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:22.468922   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.137025983s)
	I0626 20:52:22.468974   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.468988   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469285   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469339   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469359   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469390   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469315   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:22.469630   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469649   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469669   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469678   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469900   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469915   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597030   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.255690675s)
	I0626 20:52:23.597078   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.220078989s)
	I0626 20:52:23.597104   47309 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:23.597084   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597131   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597130   47309 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.097584802s)
	I0626 20:52:23.597162   47309 node_ready.go:35] waiting up to 6m0s for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.597463   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597463   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597489   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597499   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597508   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597879   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597931   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597950   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.632416   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.984627683s)
	I0626 20:52:23.632472   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632485   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.632907   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.632919   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.632940   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.632967   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632982   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.633279   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.633297   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.633307   47309 addons.go:464] Verifying addon metrics-server=true in "no-preload-934450"
	I0626 20:52:23.633353   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.635198   47309 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:19.674407   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:19.702224   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:52:19.744577   47605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=embed-certs-299839 minikube.k8s.io/updated_at=2023_06_26T20_52_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.783628   47605 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:20.149671   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:20.782659   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.283295   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.782574   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.283137   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.782766   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.282641   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.783459   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.017432   47779 out.go:204]   - Booting up control plane ...
	I0626 20:52:23.017573   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:23.019187   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:23.020097   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:23.023559   47779 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:23.025808   47779 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:23.636740   47309 addons.go:499] enable addons completed in 3.700468963s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0626 20:52:23.637657   47309 node_ready.go:49] node "no-preload-934450" has status "Ready":"True"
	I0626 20:52:23.637673   47309 node_ready.go:38] duration metric: took 40.495678ms waiting for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.637684   47309 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:23.676466   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:25.699614   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:22.489080   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:24.490209   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:24.282506   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:24.782560   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.282565   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.783022   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.282856   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.783243   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.282657   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.783258   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.282802   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.783019   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.283285   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.782968   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.282489   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.782763   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.283126   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.445729   47605 kubeadm.go:1081] duration metric: took 11.701128618s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:31.445766   47605 kubeadm.go:406] StartCluster complete in 5m31.748710798s
	I0626 20:52:31.445787   47605 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.445873   47605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:31.448427   47605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.448700   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:31.448792   47605 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:31.448866   47605 addons.go:66] Setting storage-provisioner=true in profile "embed-certs-299839"
	I0626 20:52:31.448871   47605 addons.go:66] Setting default-storageclass=true in profile "embed-certs-299839"
	I0626 20:52:31.448884   47605 addons.go:228] Setting addon storage-provisioner=true in "embed-certs-299839"
	I0626 20:52:31.448885   47605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-299839"
	W0626 20:52:31.448892   47605 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:31.448938   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:31.448948   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.448986   47605 addons.go:66] Setting metrics-server=true in profile "embed-certs-299839"
	I0626 20:52:31.449006   47605 addons.go:228] Setting addon metrics-server=true in "embed-certs-299839"
	W0626 20:52:31.449013   47605 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:31.449053   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449762   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450455   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450635   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.450708   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.468787   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0626 20:52:31.469015   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0626 20:52:31.469401   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469497   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469929   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.469947   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470036   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.470073   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470548   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470605   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470723   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39029
	I0626 20:52:31.470915   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.471202   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.471236   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.471374   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.471846   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.471871   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.481862   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.482471   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.482499   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.492391   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0626 20:52:31.493190   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.493807   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.493833   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.494190   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.494347   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.496376   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.499801   47605 addons.go:228] Setting addon default-storageclass=true in "embed-certs-299839"
	W0626 20:52:31.499822   47605 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:31.499851   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.500224   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.500253   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.506027   47605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:31.507267   47605 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.507286   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:31.507306   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.507954   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0626 20:52:31.508919   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.509350   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.509364   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.509784   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.510070   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.511452   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.513168   47605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:28.196489   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:30.196782   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:26.989644   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:29.488966   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.506536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.511805   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.512430   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.514510   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.514522   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:31.514530   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.514536   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:31.514555   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.514709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.514860   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.515029   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.517249   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517628   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.517653   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517774   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.517948   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.518282   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.518454   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.522114   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0626 20:52:31.522433   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.522982   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.523010   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.523416   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.523984   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.524019   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.545037   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0626 20:52:31.545523   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.546109   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.546140   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.546551   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.546826   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.549289   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.549597   47605 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.549618   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:31.549638   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.553457   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553713   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.553744   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553798   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.553995   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.554131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.554284   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.693230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:31.713818   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.718654   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:31.718682   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:31.734681   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.767394   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:31.767424   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:31.884424   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:31.884443   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:31.961893   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:32.055887   47605 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-299839" context rescaled to 1 replicas
	I0626 20:52:32.055933   47605 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:32.058697   47605 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:32.530480   47779 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.504525 seconds
	I0626 20:52:32.530633   47779 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:32.556112   47779 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:33.096104   47779 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:33.096372   47779 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-473235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:33.615425   47779 kubeadm.go:322] [bootstrap-token] Using token: fvy9dh.hbeabw0ufqdnf2rd
	I0626 20:52:33.617480   47779 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:33.617622   47779 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:33.630158   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:33.641973   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:33.649480   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:33.657736   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:33.663093   47779 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:33.698108   47779 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:34.017843   47779 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:34.069498   47779 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:34.070500   47779 kubeadm.go:322] 
	I0626 20:52:34.070587   47779 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:34.070600   47779 kubeadm.go:322] 
	I0626 20:52:34.070691   47779 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:34.070705   47779 kubeadm.go:322] 
	I0626 20:52:34.070734   47779 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:34.070809   47779 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:34.070915   47779 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:34.070952   47779 kubeadm.go:322] 
	I0626 20:52:34.071047   47779 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:34.071060   47779 kubeadm.go:322] 
	I0626 20:52:34.071114   47779 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:34.071124   47779 kubeadm.go:322] 
	I0626 20:52:34.071183   47779 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:34.071276   47779 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:34.071360   47779 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:34.071369   47779 kubeadm.go:322] 
	I0626 20:52:34.071454   47779 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:34.071550   47779 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:34.071558   47779 kubeadm.go:322] 
	I0626 20:52:34.071677   47779 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.071823   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:34.071852   47779 kubeadm.go:322] 	--control-plane 
	I0626 20:52:34.071860   47779 kubeadm.go:322] 
	I0626 20:52:34.071961   47779 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:34.071973   47779 kubeadm.go:322] 
	I0626 20:52:34.072075   47779 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.072202   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:34.072734   47779 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:34.072775   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:52:34.072794   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:34.074659   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:32.060653   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:33.969636   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.276366101s)
	I0626 20:52:33.969679   47605 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:34.114443   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.400580422s)
	I0626 20:52:34.114587   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114636   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114483   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.379765696s)
	I0626 20:52:34.114695   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114993   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115036   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115049   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.115059   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.115068   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.115386   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115394   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115458   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117682   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.117720   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.117736   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117754   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.117764   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.119184   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.119204   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.119218   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.119238   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.119253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.120750   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.120787   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.120800   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.800635   47605 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.739945617s)
	I0626 20:52:34.800672   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838732117s)
	I0626 20:52:34.800721   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.800740   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.800674   47605 node_ready.go:35] waiting up to 6m0s for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.801059   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.801086   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.801103   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.801112   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.802733   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.802767   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.802781   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.802798   47605 addons.go:464] Verifying addon metrics-server=true in "embed-certs-299839"
	I0626 20:52:34.804616   47605 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:52:34.076233   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:34.097578   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:52:34.126294   47779 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:34.126351   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.126361   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=default-k8s-diff-port-473235 minikube.k8s.io/updated_at=2023_06_26T20_52_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.672738   47779 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:34.672886   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:32.196979   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.198202   47309 pod_ready.go:97] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198243   47309 pod_ready.go:81] duration metric: took 10.521748073s waiting for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:34.198256   47309 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198265   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208718   47309 pod_ready.go:92] pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.208751   47309 pod_ready.go:81] duration metric: took 10.474456ms waiting for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208765   47309 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216757   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.216787   47309 pod_ready.go:81] duration metric: took 8.014039ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216800   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226840   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.226862   47309 pod_ready.go:81] duration metric: took 10.054474ms waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226875   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234229   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.234252   47309 pod_ready.go:81] duration metric: took 7.369366ms waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234265   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603958   47309 pod_ready.go:92] pod "kube-proxy-jhz99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.603985   47309 pod_ready.go:81] duration metric: took 369.712585ms waiting for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603999   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.992990   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.993018   47309 pod_ready.go:81] duration metric: took 389.011206ms waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.993033   47309 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:33.991358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:36.489561   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.806005   47605 addons.go:499] enable addons completed in 3.357208024s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:52:34.826098   47605 node_ready.go:49] node "embed-certs-299839" has status "Ready":"True"
	I0626 20:52:34.826123   47605 node_ready.go:38] duration metric: took 25.328707ms waiting for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.826131   47605 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:34.853293   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388894   47605 pod_ready.go:92] pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.388921   47605 pod_ready.go:81] duration metric: took 1.535604079s waiting for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388931   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397936   47605 pod_ready.go:92] pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.397962   47605 pod_ready.go:81] duration metric: took 9.024703ms waiting for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397978   47605 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409066   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.409098   47605 pod_ready.go:81] duration metric: took 11.112746ms waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409111   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419292   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.419313   47605 pod_ready.go:81] duration metric: took 10.193966ms waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419322   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429116   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.429140   47605 pod_ready.go:81] duration metric: took 9.812044ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429154   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316268   47605 pod_ready.go:92] pod "kube-proxy-scfwr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.316318   47605 pod_ready.go:81] duration metric: took 887.155494ms waiting for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316334   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605351   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.605394   47605 pod_ready.go:81] duration metric: took 289.052198ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605409   47605 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:35.287764   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:35.787902   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.287089   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.786922   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.287932   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.787255   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.287820   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.786891   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.287467   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.787282   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.400022   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:39.401566   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:41.404969   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:38.491696   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.990293   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.013927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:42.518436   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.287734   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:40.786949   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.287187   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.787722   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.287098   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.787623   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.287242   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.787224   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.287339   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.787760   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.287273   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.787052   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.287810   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.436665   47779 kubeadm.go:1081] duration metric: took 12.310369141s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:46.436696   47779 kubeadm.go:406] StartCluster complete in 5m23.972219662s
	I0626 20:52:46.436715   47779 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.436798   47779 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:46.438623   47779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.438897   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:46.439016   47779 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:46.439110   47779 addons.go:66] Setting storage-provisioner=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439117   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:46.439128   47779 addons.go:66] Setting default-storageclass=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439166   47779 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-473235"
	I0626 20:52:46.439128   47779 addons.go:228] Setting addon storage-provisioner=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439240   47779 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:46.439285   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439133   47779 addons.go:66] Setting metrics-server=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439336   47779 addons.go:228] Setting addon metrics-server=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439346   47779 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:46.439383   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439663   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439691   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439694   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439717   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439733   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439754   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.456038   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0626 20:52:46.456227   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0626 20:52:46.456533   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.456769   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.457072   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457092   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457258   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457280   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457413   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457749   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457902   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.459751   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0626 20:52:46.460296   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.460326   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.460688   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.462951   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.462975   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.463384   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.463981   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.464006   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.477368   47779 addons.go:228] Setting addon default-storageclass=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.477472   47779 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:46.477516   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.477987   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.478063   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.479865   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0626 20:52:46.480358   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.480932   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.480951   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.481335   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.482608   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0626 20:52:46.482630   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.482982   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.483505   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.483521   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.483907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.484123   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.485234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.487634   47779 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:46.486430   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.488916   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:46.488938   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:46.488959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.490698   47779 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:43.900514   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.900540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:43.488701   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.992735   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:46.491860   47779 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.491875   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:46.491893   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.492950   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.493834   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.493855   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.494361   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.494827   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.494987   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.495130   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.496109   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.496170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496192   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.496213   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496294   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.496444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.496549   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.502119   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
	I0626 20:52:46.502456   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.502898   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.502916   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.503203   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.503723   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.503747   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.522597   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0626 20:52:46.523240   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.523892   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.523912   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.524423   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.524674   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.526567   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.528682   47779 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.528699   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:46.528721   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.531983   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532450   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.532477   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532785   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.533992   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.534158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.534302   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.698636   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
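The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts block ahead of the forward directive so that host.minikube.internal resolves to the host gateway (192.168.61.1 here), and adds log ahead of errors. A sketch of the resulting Corefile, assuming the stock upstream defaults around the injected lines:

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }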
	I0626 20:52:46.819666   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.915074   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.918133   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:46.918161   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:47.006856   47779 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-473235" context rescaled to 1 replica
	I0626 20:52:47.006907   47779 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:47.008746   47779 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:45.013051   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.014722   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.010273   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:47.015003   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:47.015022   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:47.099554   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:47.099583   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:47.162192   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
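The four manifests applied here register the metrics.k8s.io aggregated API and deploy metrics-server. As a hedged sketch of metrics-apiservice.yaml (the exact file minikube ships may differ in details), the APIService typically looks like:

    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      version: v1beta1
      groupPriorityMinimum: 100
      versionPriority: 100
      insecureSkipTLSVerify: true
      service:
        name: metrics-server
        namespace: kube-system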
	I0626 20:52:48.848078   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.149396252s)
	I0626 20:52:48.848110   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.028412306s)
	I0626 20:52:48.848145   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848157   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848112   47779 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:48.848418   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848438   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848440   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848448   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848460   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848678   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848699   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848712   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848715   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848722   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848936   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848959   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.142482   47779 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.13217662s)
	I0626 20:52:49.142522   47779 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.142664   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.227563186s)
	I0626 20:52:49.142706   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.142723   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143018   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143037   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143047   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.143055   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.143309   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143402   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143378   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.230635   47779 node_ready.go:49] node "default-k8s-diff-port-473235" has status "Ready":"True"
	I0626 20:52:49.230663   47779 node_ready.go:38] duration metric: took 88.12938ms waiting for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.230688   47779 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:49.248094   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
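Each of these pod_ready waits polls the pod's Ready condition until it reports True or the deadline expires. The same check can be reproduced by hand (pod name taken from the log above):

    kubectl --context default-k8s-diff-port-473235 -n kube-system \
      get pod coredns-5d78c9869d-bfqmv \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "True" once the pod is Ready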
	I0626 20:52:49.857182   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694948259s)
	I0626 20:52:49.857243   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857254   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857552   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857569   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857579   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857588   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857816   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857836   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857847   47779 addons.go:464] Verifying addon metrics-server=true in "default-k8s-diff-port-473235"
	I0626 20:52:49.859648   47779 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:49.860902   47779 addons.go:499] enable addons completed in 3.421885216s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0626 20:52:47.901422   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.402347   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:48.490248   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.991228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.082154   46683 pod_ready.go:81] duration metric: took 4m0.000473504s waiting for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:51.082180   46683 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:52:51.082198   46683 pod_ready.go:38] duration metric: took 4m1.199581008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:51.082227   46683 kubeadm.go:640] restartCluster took 5m4.421255564s
	W0626 20:52:51.082286   46683 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:52:51.082319   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:52:50.897742   47779 pod_ready.go:92] pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.897765   47779 pod_ready.go:81] duration metric: took 1.649649958s waiting for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.897777   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.924988   47779 pod_ready.go:92] pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.925007   47779 pod_ready.go:81] duration metric: took 27.222965ms waiting for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.925016   47779 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942760   47779 pod_ready.go:92] pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.942781   47779 pod_ready.go:81] duration metric: took 17.75819ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942790   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956204   47779 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.956224   47779 pod_ready.go:81] duration metric: took 13.428405ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956235   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964542   47779 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.964569   47779 pod_ready.go:81] duration metric: took 8.32705ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964581   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791355   47779 pod_ready.go:92] pod "kube-proxy-k4hzc" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:51.791376   47779 pod_ready.go:81] duration metric: took 826.787812ms waiting for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791384   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078670   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:52.078700   47779 pod_ready.go:81] duration metric: took 287.306474ms waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078714   47779 pod_ready.go:38] duration metric: took 2.848014299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:52.078733   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:52:52.078789   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:52:52.094414   47779 api_server.go:72] duration metric: took 5.08747775s to wait for apiserver process to appear ...
	I0626 20:52:52.094444   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:52:52.094468   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:52:52.101300   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:52:52.102682   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:52:52.102703   47779 api_server.go:131] duration metric: took 8.250707ms to wait for apiserver health ...
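The healthz and version probes can be reproduced against the non-default API port (8444) that gives this profile its name; -k skips TLS verification, so point curl at the cluster CA for a stricter check:

    curl -sk https://192.168.61.238:8444/healthz    # expect: ok
    curl -sk https://192.168.61.238:8444/version    # JSON including "gitVersion": "v1.27.3"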
	I0626 20:52:52.102712   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:52:52.283428   47779 system_pods.go:59] 9 kube-system pods found
	I0626 20:52:52.283459   47779 system_pods.go:61] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.283467   47779 system_pods.go:61] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.283474   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.283482   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.283488   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.283493   47779 system_pods.go:61] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.283500   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.283511   47779 system_pods.go:61] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.283519   47779 system_pods.go:61] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.283527   47779 system_pods.go:74] duration metric: took 180.810034ms to wait for pod list to return data ...
	I0626 20:52:52.283540   47779 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:52:52.478374   47779 default_sa.go:45] found service account: "default"
	I0626 20:52:52.478400   47779 default_sa.go:55] duration metric: took 194.853163ms for default service account to be created ...
	I0626 20:52:52.478418   47779 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:52:52.683697   47779 system_pods.go:86] 9 kube-system pods found
	I0626 20:52:52.683724   47779 system_pods.go:89] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.683730   47779 system_pods.go:89] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.683735   47779 system_pods.go:89] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.683740   47779 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.683745   47779 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.683748   47779 system_pods.go:89] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.683752   47779 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.683761   47779 system_pods.go:89] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.683773   47779 system_pods.go:89] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.683789   47779 system_pods.go:126] duration metric: took 205.364587ms to wait for k8s-apps to be running ...
	I0626 20:52:52.683798   47779 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:52:52.683846   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:52.698439   47779 system_svc.go:56] duration metric: took 14.634482ms WaitForService to wait for kubelet.
	I0626 20:52:52.698463   47779 kubeadm.go:581] duration metric: took 5.691531199s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:52:52.698480   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:52:52.879414   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:52:52.879441   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:52:52.879454   47779 node_conditions.go:105] duration metric: took 180.969761ms to run NodePressure ...
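The NodePressure verification reads the capacity figures the kubelet reports on the node object; the same numbers are visible directly:

    kubectl --context default-k8s-diff-port-473235 get node default-k8s-diff-port-473235 \
      -o jsonpath='{.status.capacity.cpu}{"\n"}{.status.capacity.ephemeral-storage}{"\n"}'
    # 2
    # 17784752Ki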
	I0626 20:52:52.879466   47779 start.go:228] waiting for startup goroutines ...
	I0626 20:52:52.879473   47779 start.go:233] waiting for cluster config update ...
	I0626 20:52:52.879484   47779 start.go:242] writing updated cluster config ...
	I0626 20:52:52.879748   47779 ssh_runner.go:195] Run: rm -f paused
	I0626 20:52:52.928982   47779 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:52:52.930701   47779 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-473235" cluster and "default" namespace by default
	I0626 20:52:49.513843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.515851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:54.013443   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:52.901965   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:55.400541   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:56.014186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:58.516445   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:57.900857   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:59.901944   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:01.013089   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:03.015510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:02.400534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:04.400691   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:06.401897   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:05.513529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.013510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.901751   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:11.400891   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:10.513562   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:12.515529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:13.900503   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:15.900570   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:14.208647   46683 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (23.126299276s)
	I0626 20:53:14.208727   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:14.222919   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:53:14.234762   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:53:14.244800   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:53:14.244840   46683 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0626 20:53:14.465786   46683 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:53:15.014781   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.017400   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.901367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:20.401697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:19.515459   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.015763   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.900407   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:24.901270   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.255771   46683 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0626 20:53:27.255867   46683 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:53:27.255968   46683 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:53:27.256115   46683 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:53:27.256237   46683 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:53:27.256368   46683 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:53:27.256489   46683 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:53:27.256550   46683 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0626 20:53:27.256604   46683 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:53:27.258050   46683 out.go:204]   - Generating certificates and keys ...
	I0626 20:53:27.258140   46683 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:53:27.258235   46683 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:53:27.258357   46683 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:53:27.258441   46683 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:53:27.258554   46683 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:53:27.258611   46683 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:53:27.258665   46683 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:53:27.258737   46683 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:53:27.258832   46683 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:53:27.258907   46683 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:53:27.258954   46683 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:53:27.259034   46683 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:53:27.259106   46683 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:53:27.259170   46683 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:53:27.259247   46683 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:53:27.259325   46683 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:53:27.259410   46683 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:53:27.260969   46683 out.go:204]   - Booting up control plane ...
	I0626 20:53:27.261074   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:53:27.261181   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:53:27.261257   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:53:27.261341   46683 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:53:27.261496   46683 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:53:27.261599   46683 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003012 seconds
	I0626 20:53:27.261709   46683 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:53:27.261854   46683 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:53:27.261940   46683 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:53:27.262112   46683 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-490377 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0626 20:53:27.262210   46683 kubeadm.go:322] [bootstrap-token] Using token: 9pdj92.0ssfpvr0ns0ww3t3
	I0626 20:53:27.263670   46683 out.go:204]   - Configuring RBAC rules ...
	I0626 20:53:27.263769   46683 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:53:27.263903   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:53:27.264029   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:53:27.264172   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:53:27.264278   46683 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:53:27.264333   46683 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:53:27.264372   46683 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:53:27.264379   46683 kubeadm.go:322] 
	I0626 20:53:27.264445   46683 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:53:27.264454   46683 kubeadm.go:322] 
	I0626 20:53:27.264557   46683 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:53:27.264568   46683 kubeadm.go:322] 
	I0626 20:53:27.264598   46683 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:53:27.264668   46683 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:53:27.264715   46683 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:53:27.264721   46683 kubeadm.go:322] 
	I0626 20:53:27.264769   46683 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:53:27.264846   46683 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:53:27.264934   46683 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:53:27.264943   46683 kubeadm.go:322] 
	I0626 20:53:27.265038   46683 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0626 20:53:27.265101   46683 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:53:27.265107   46683 kubeadm.go:322] 
	I0626 20:53:27.265171   46683 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265269   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:53:27.265292   46683 kubeadm.go:322]     --control-plane 	  
	I0626 20:53:27.265298   46683 kubeadm.go:322] 
	I0626 20:53:27.265439   46683 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:53:27.265451   46683 kubeadm.go:322] 
	I0626 20:53:27.265581   46683 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265739   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
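The bootstrap token printed above has a default TTL of 24h; if it has expired by the time another node joins, a fresh join command can be generated on the control plane:

    sudo kubeadm token create --print-join-command
    # emits a kubeadm join line with a new --token and the same --discovery-token-ca-cert-hash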
	I0626 20:53:27.265753   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:53:27.265765   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:53:27.267293   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:53:24.515093   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.014403   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.401630   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:29.404203   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.268439   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:53:27.281135   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
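The 457-byte conflist copied above wires pods onto a Linux bridge with host-local IPAM. A hedged sketch of what such a bridge conflist typically contains (minikube's exact 1-k8s.conflist may differ in names and subnet):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }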
	I0626 20:53:27.304145   46683 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:53:27.304275   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=old-k8s-version-490377 minikube.k8s.io/updated_at=2023_06_26T20_53_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.304277   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
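The minikube-rbac binding above grants cluster-admin to the kube-system default service account, so components running under that account get full API access. Whether it took effect can be spot-checked with:

    kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default
    # yes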
	I0626 20:53:27.555789   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.571040   46683 ops.go:34] apiserver oom_adj: -16
	I0626 20:53:28.180843   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:28.681089   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.180441   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.680355   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.180860   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.680971   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.181088   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.680352   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.516069   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.013135   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.013391   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:31.901777   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.400314   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:36.400967   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.180338   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:32.680389   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.180568   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.681010   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.180575   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.680905   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.180640   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.680412   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.181081   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.680836   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.514263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:39.013193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:38.900309   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:40.900622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:37.181178   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:37.680710   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.180280   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.680304   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.681177   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.180431   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.681031   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.180847   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.681058   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.680883   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.800538   46683 kubeadm.go:1081] duration metric: took 15.496322508s to wait for elevateKubeSystemPrivileges.
	I0626 20:53:42.800568   46683 kubeadm.go:406] StartCluster complete in 5m56.189450192s
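The repeated get sa default calls above are a poll: minikube retries until the controller-manager has created the default service account, which took roughly 15.5s here. A minimal bash equivalent, assuming the ~500ms retry interval visible in the timestamps:

    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done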
	I0626 20:53:42.800584   46683 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.800661   46683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:53:42.802530   46683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.802755   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:53:42.802810   46683 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:53:42.802908   46683 addons.go:66] Setting storage-provisioner=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802926   46683 addons.go:228] Setting addon storage-provisioner=true in "old-k8s-version-490377"
	W0626 20:53:42.802936   46683 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:53:42.802934   46683 addons.go:66] Setting default-storageclass=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802953   46683 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-490377"
	I0626 20:53:42.802972   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:53:42.802983   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.802974   46683 addons.go:66] Setting metrics-server=true in profile "old-k8s-version-490377"
	I0626 20:53:42.803034   46683 addons.go:228] Setting addon metrics-server=true in "old-k8s-version-490377"
	W0626 20:53:42.803048   46683 addons.go:237] addon metrics-server should already be in state true
	I0626 20:53:42.803155   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.803353   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803394   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803437   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803468   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803563   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803607   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.822676   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0626 20:53:42.822891   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0626 20:53:42.823127   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823221   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0626 20:53:42.823284   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823599   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823763   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823771   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823783   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.823790   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824056   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.824082   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824096   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824141   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824310   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.824408   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824656   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824682   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.824924   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824954   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.839635   46683 addons.go:228] Setting addon default-storageclass=true in "old-k8s-version-490377"
	W0626 20:53:42.839655   46683 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:53:42.839675   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.840131   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.840171   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.846479   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0626 20:53:42.847180   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.847711   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.847728   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.848194   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.848454   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.848519   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I0626 20:53:42.850321   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.850427   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.852331   46683 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:53:42.851252   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.853522   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.853581   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:53:42.853603   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:53:42.853625   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.854082   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.854292   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.856641   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.858158   46683 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:53:42.857809   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.859467   46683 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:42.859485   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:53:42.859500   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.859505   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.859528   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.858223   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.858466   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0626 20:53:42.860179   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.860331   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.860421   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.860783   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.860909   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.860923   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.861642   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.862199   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.862246   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.863700   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864103   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.864124   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864413   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.864598   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.864737   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.864867   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.878470   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0626 20:53:42.878961   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.879500   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.879510   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.879860   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.880063   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.881757   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.882028   46683 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:42.882040   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:53:42.882054   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.887689   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.887749   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.887779   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887888   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.888058   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.888203   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.981495   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
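The bash pipeline above patches the coredns ConfigMap in place. Reading back its two sed expressions, it inserts a log directive before the errors plugin and the following hosts block ahead of the "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves to the host gateway (192.168.72.1 here); the trailing "kubectl ... replace -f -" writes the edited ConfigMap back. Reconstructed from the command, the patched Corefile gains:

        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }

The "host record injected into CoreDNS's ConfigMap" line further down confirms the edit landed.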
	I0626 20:53:43.064530   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:53:43.064554   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:53:43.074105   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:43.091876   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:43.132074   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:53:43.132095   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:53:43.219103   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:53:43.219133   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:53:43.285081   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
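The four manifests applied above install the metrics-server Deployment, its RBAC objects, its Service, and the aggregated APIService. A hedged way to inspect what they registered, assuming the stock metrics-server object names these manifests conventionally create (the v1beta1.metrics.k8s.io APIService and the k8s-app=metrics-server label, neither of which is confirmed by this log):

    kubectl --context old-k8s-version-490377 get apiservice v1beta1.metrics.k8s.io
    kubectl --context old-k8s-version-490377 -n kube-system get pods -l k8s-app=metrics-server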
	I0626 20:53:43.443796   46683 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-490377" context rescaled to 1 replicas
	I0626 20:53:43.443841   46683 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:53:43.445639   46683 out.go:177] * Verifying Kubernetes components...
	I0626 20:53:41.014279   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.515278   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.447458   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:43.642242   46683 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0626 20:53:44.194901   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.102988033s)
	I0626 20:53:44.194990   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195008   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.194932   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120793889s)
	I0626 20:53:44.195085   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195096   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195452   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195466   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195475   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195486   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195493   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195518   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195529   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195714   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195765   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195774   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195816   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195893   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195905   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195922   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195936   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.196171   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.196190   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.196197   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.260680   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.260703   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.260706   46683 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.261103   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261122   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261134   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.261144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.261146   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.261364   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261386   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261396   46683 addons.go:464] Verifying addon metrics-server=true in "old-k8s-version-490377"
	I0626 20:53:44.262936   46683 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:53:42.901604   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.902182   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.264049   46683 addons.go:499] enable addons completed in 1.461244367s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:53:44.318103   46683 node_ready.go:49] node "old-k8s-version-490377" has status "Ready":"True"
	I0626 20:53:44.318135   46683 node_ready.go:38] duration metric: took 57.40895ms waiting for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.318147   46683 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:53:44.333409   46683 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:46.345926   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:46.015128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.516066   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:47.400802   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:49.901066   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.347529   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:50.847639   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:51.012404   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.012697   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:52.400326   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:54.400932   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.402434   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.345907   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:55.345824   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.345850   46683 pod_ready.go:81] duration metric: took 11.012408828s waiting for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.345858   46683 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350198   46683 pod_ready.go:92] pod "kube-proxy-m7hz7" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.350214   46683 pod_ready.go:81] duration metric: took 4.351274ms waiting for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350222   46683 pod_ready.go:38] duration metric: took 11.032065043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:53:55.350236   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:53:55.350285   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:53:55.366478   46683 api_server.go:72] duration metric: took 11.922600619s to wait for apiserver process to appear ...
	I0626 20:53:55.366501   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:53:55.366518   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:53:55.373257   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:53:55.374362   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:53:55.374382   46683 api_server.go:131] duration metric: took 7.874169ms to wait for apiserver health ...
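The healthz probe above is simply an HTTPS GET against the apiserver endpoint shown in the log. Reproduced by hand from inside the VM it would look roughly like the sketch below; the -k flag (or the cluster CA) is needed because the endpoint is addressed by node IP, and on clusters where anonymous access to /healthz is restricted the client certificates minikube keeps under /var/lib/minikube/certs would have to be passed as well. Both of those details are assumptions about this VM, not shown in the log:

    curl -sk https://192.168.72.111:8443/healthz
    ok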
	I0626 20:53:55.374390   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:53:55.377704   46683 system_pods.go:59] 4 kube-system pods found
	I0626 20:53:55.377719   46683 system_pods.go:61] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.377724   46683 system_pods.go:61] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.377744   46683 system_pods.go:61] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.377754   46683 system_pods.go:61] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.377759   46683 system_pods.go:74] duration metric: took 3.35753ms to wait for pod list to return data ...
	I0626 20:53:55.377765   46683 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:53:55.379628   46683 default_sa.go:45] found service account: "default"
	I0626 20:53:55.379641   46683 default_sa.go:55] duration metric: took 1.87263ms for default service account to be created ...
	I0626 20:53:55.379647   46683 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:53:55.382155   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.382171   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.382176   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.382183   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.382189   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.382204   46683 retry.go:31] will retry after 310.903974ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:55.698587   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.698613   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.698618   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.698625   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.698631   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.698646   46683 retry.go:31] will retry after 300.100433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.005356   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.005397   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.005408   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.005419   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.005427   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.005446   46683 retry.go:31] will retry after 407.352435ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.417879   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.417905   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.417910   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.417916   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.417922   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.417935   46683 retry.go:31] will retry after 483.508514ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
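The retry.go lines in this stretch show the wait loop polling kube-system pods with jittered, roughly growing delays (310ms, 300ms, 407ms, 483ms, then whole seconds) until the static control-plane pods reappear. metrics-server, by contrast, stays Pending for the entire run because the addon was pointed at fake.domain/registry.k8s.io/echoserver:1.4 (the "Using image" line near the top of this section), a registry name that is not expected to resolve, so its image can never be pulled. A minimal Go sketch of the same wait-with-backoff pattern, using a hypothetical check function rather than minikube's actual retry package API:

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitFor polls check() until it returns nil or the deadline passes,
    // growing the delay between attempts, loosely mirroring the retry.go
    // back-off visible in the log above. check is a stand-in for a real
    // "are all system pods running?" probe.
    func waitFor(timeout time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    		// grow the delay, capped so polling stays responsive
    		if delay < 10*time.Second {
    			delay += delay / 2
    		}
    	}
    }

    func main() {
    	start := time.Now()
    	_ = waitFor(3*time.Second, func() error {
    		if time.Since(start) > 2*time.Second {
    			return nil // components came up
    		}
    		return fmt.Errorf("missing components: kube-apiserver")
    	})
    }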
	I0626 20:53:55.013247   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:57.015631   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:58.900650   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.401491   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.906260   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.906282   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.906287   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.906293   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.906301   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.906319   46683 retry.go:31] will retry after 527.167542ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:57.438949   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:57.438985   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:57.438995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:57.439006   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:57.439019   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:57.439038   46683 retry.go:31] will retry after 902.255612ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:58.346131   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:58.346161   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:58.346166   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:58.346173   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:58.346179   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:58.346192   46683 retry.go:31] will retry after 904.271086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.256458   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:59.256489   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:59.256497   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:59.256509   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:59.256517   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:59.256534   46683 retry.go:31] will retry after 1.069634228s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:00.331828   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:00.331858   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:00.331865   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:00.331873   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:00.331879   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:00.331896   46683 retry.go:31] will retry after 1.418598639s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:01.755104   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:01.755131   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:01.755136   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:01.755143   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:01.755149   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:01.755162   46683 retry.go:31] will retry after 1.624135654s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.514847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.515086   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.900425   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:05.900854   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.385085   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:03.385111   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:03.385116   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:03.385122   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:03.385128   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:03.385142   46683 retry.go:31] will retry after 1.861818901s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:05.251844   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:05.251870   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:05.251875   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:05.251882   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:05.251888   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:05.251901   46683 retry.go:31] will retry after 3.23679019s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:06.013786   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.514493   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.399542   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:10.400928   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.494644   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:08.494669   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:08.494674   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:08.494681   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:08.494687   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:08.494700   46683 retry.go:31] will retry after 4.210335189s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:10.514707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.515079   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.415273   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:14.899807   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.709730   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:12.709754   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:12.709759   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:12.709765   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:12.709771   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:12.709785   46683 retry.go:31] will retry after 4.208864521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:15.012766   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:17.012807   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.014851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.901107   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.400540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:21.402204   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.923625   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:16.923654   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:16.923662   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:16.923673   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:16.923682   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:16.923701   46683 retry.go:31] will retry after 6.417296046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:21.514829   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.515117   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.402546   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:25.903195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.347074   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:23.347099   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:23.347105   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Pending
	I0626 20:54:23.347108   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:23.347115   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:23.347121   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:23.347133   46683 retry.go:31] will retry after 7.108155838s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:26.013263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.013708   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.399697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.401036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.460927   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:30.460950   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:30.460955   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:30.460995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:30.461004   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:30.461014   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:30.461027   46683 retry.go:31] will retry after 9.756193162s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:30.514139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.514334   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:34.901064   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:35.013362   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.013815   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.014126   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.400945   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:40.223985   46683 system_pods.go:86] 7 kube-system pods found
	I0626 20:54:40.224009   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:40.224014   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Pending
	I0626 20:54:40.224018   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Pending
	I0626 20:54:40.224022   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:40.224026   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:40.224032   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:40.224037   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:40.224052   46683 retry.go:31] will retry after 8.963386657s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:41.515388   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:44.015053   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:41.900424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:43.901263   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.400098   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.514128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.013743   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.195390   46683 system_pods.go:86] 8 kube-system pods found
	I0626 20:54:49.195416   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:49.195421   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Running
	I0626 20:54:49.195426   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Running
	I0626 20:54:49.195430   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:49.195434   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:49.195438   46683 system_pods.go:89] "kube-scheduler-old-k8s-version-490377" [c6fe04b8-d037-452b-bf41-3719c032b7ef] Running
	I0626 20:54:49.195444   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:49.195450   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:49.195458   46683 system_pods.go:126] duration metric: took 53.81580645s to wait for k8s-apps to be running ...
	I0626 20:54:49.195466   46683 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:54:49.195518   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:54:49.219014   46683 system_svc.go:56] duration metric: took 23.534309ms WaitForService to wait for kubelet.
	I0626 20:54:49.219049   46683 kubeadm.go:581] duration metric: took 1m5.775176119s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:54:49.219089   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:54:49.223397   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:54:49.223426   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:54:49.223438   46683 node_conditions.go:105] duration metric: took 4.339435ms to run NodePressure ...
	I0626 20:54:49.223452   46683 start.go:228] waiting for startup goroutines ...
	I0626 20:54:49.223461   46683 start.go:233] waiting for cluster config update ...
	I0626 20:54:49.223472   46683 start.go:242] writing updated cluster config ...
	I0626 20:54:49.223798   46683 ssh_runner.go:195] Run: rm -f paused
	I0626 20:54:49.277613   46683 start.go:652] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0626 20:54:49.279501   46683 out.go:177] 
	W0626 20:54:49.280841   46683 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0626 20:54:49.282249   46683 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0626 20:54:49.283695   46683 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-490377" cluster and "default" namespace by default
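The warning above flags an 11-minor-version skew between the host's kubectl (1.27.3) and the 1.16.0 cluster; kubectl only supports a skew of one minor version in either direction, so a 1.27 client may request API versions a 1.16 apiserver never served. The log's own suggestion sidesteps this by running the version-matched client that minikube bundles, e.g. (binary path and profile flag assumed from this run):

    out/minikube-linux-amd64 -p old-k8s-version-490377 kubectl -- get pods -A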
	I0626 20:54:48.401602   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:50.900375   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:51.514071   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.013330   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:52.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.900946   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.013501   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:58.014748   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.901531   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:59.401822   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:00.016725   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:02.514316   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:01.902698   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:04.400011   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:06.402149   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:05.014536   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:07.514975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:08.900297   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.900463   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.013780   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:12.514823   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:13.399907   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.400044   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.014032   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.515161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.907245   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.400962   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.015074   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.514465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.403366   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.900247   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.514993   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.012592   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.013612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.400192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.401917   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.402240   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.015647   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.513844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.900187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.902063   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.514657   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:37.514888   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:38.400753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.902398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.014755   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:42.514599   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:43.401280   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:45.902265   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:44.521736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.016422   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.902334   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:50.400765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:49.515570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.014736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.900293   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.900572   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.514047   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.013346   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.013409   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.400170   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.401528   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.013946   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:03.014845   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.902597   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:04.401919   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:05.514639   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:08.016797   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:06.901493   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:09.400229   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:11.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:10.513478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:12.514938   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:13.403138   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.901738   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.013852   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:17.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:18.400812   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.401025   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.013522   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.015651   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.016747   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.401212   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.401675   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.515343   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:28.515706   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.902301   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:29.401779   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.012844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:33.013826   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.901622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.403688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.993256   47309 pod_ready.go:81] duration metric: took 4m0.000204736s waiting for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:34.993309   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:34.993324   47309 pod_ready.go:38] duration metric: took 4m11.355630262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:56:34.993352   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:34.993410   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:34.993484   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:35.038316   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.038342   47309 cri.go:89] found id: ""
	I0626 20:56:35.038352   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:35.038414   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.042851   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:35.042914   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:35.076892   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.076925   47309 cri.go:89] found id: ""
	I0626 20:56:35.076934   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:35.076990   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.081850   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:35.081933   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:35.119872   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.119896   47309 cri.go:89] found id: ""
	I0626 20:56:35.119904   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:35.119971   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.124661   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:35.124731   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:35.158899   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.158924   47309 cri.go:89] found id: ""
	I0626 20:56:35.158933   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:35.158991   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.163512   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:35.163587   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:35.195698   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.195721   47309 cri.go:89] found id: ""
	I0626 20:56:35.195729   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:35.195786   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.199883   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:35.199935   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:35.243909   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.243932   47309 cri.go:89] found id: ""
	I0626 20:56:35.243939   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:35.243992   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.248331   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:35.248388   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:35.287985   47309 cri.go:89] found id: ""
	I0626 20:56:35.288009   47309 logs.go:284] 0 containers: []
	W0626 20:56:35.288019   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:35.288026   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:35.288085   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:35.324050   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.324129   47309 cri.go:89] found id: ""
	I0626 20:56:35.324151   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:35.324219   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.328564   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:35.328588   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:35.369968   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:35.369997   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:35.391588   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:35.391615   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:35.542328   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:35.542356   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.579140   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:35.579172   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.635428   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:35.635463   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.674715   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:35.674750   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.732788   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:35.732837   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.774860   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:35.774901   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:35.881082   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:35.881118   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.929445   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:35.929478   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.968723   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:35.968754   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:35.015798   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.514548   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.606375   47605 pod_ready.go:81] duration metric: took 4m0.000950536s waiting for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:37.606403   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:37.606412   47605 pod_ready.go:38] duration metric: took 4m2.78027212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:56:37.606429   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:37.606459   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:37.606521   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:37.668350   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:37.668383   47605 cri.go:89] found id: ""
	I0626 20:56:37.668391   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:37.668453   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.675583   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:37.675669   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:37.710826   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:37.710852   47605 cri.go:89] found id: ""
	I0626 20:56:37.710860   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:37.710916   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.715610   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:37.715671   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:37.751709   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:37.751784   47605 cri.go:89] found id: ""
	I0626 20:56:37.751812   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:37.751877   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.757177   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:37.757241   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:37.790384   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:37.790413   47605 cri.go:89] found id: ""
	I0626 20:56:37.790420   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:37.790468   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.795294   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:37.795352   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:37.832125   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:37.832157   47605 cri.go:89] found id: ""
	I0626 20:56:37.832168   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:37.832239   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.836762   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:37.836816   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:37.877789   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:37.877817   47605 cri.go:89] found id: ""
	I0626 20:56:37.877827   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:37.877887   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.885276   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:37.885348   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:37.929701   47605 cri.go:89] found id: ""
	I0626 20:56:37.929731   47605 logs.go:284] 0 containers: []
	W0626 20:56:37.929745   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:37.929755   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:37.929815   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:37.970177   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:37.970201   47605 cri.go:89] found id: ""
	I0626 20:56:37.970211   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:37.970270   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.975002   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:37.975025   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:38.022831   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:38.022862   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:38.058414   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:38.058446   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:38.168689   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:38.168726   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:38.183930   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:38.183959   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:38.224623   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:38.224653   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:38.271164   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:38.271205   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:38.308365   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:38.308391   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:38.363321   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:38.363356   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:38.510275   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:38.510310   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:38.552512   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:38.552544   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:38.586122   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:38.586155   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:38.945144   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:38.962999   47309 api_server.go:72] duration metric: took 4m18.467522928s to wait for apiserver process to appear ...
	I0626 20:56:38.963026   47309 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:56:38.963067   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:38.963129   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:39.002109   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.002133   47309 cri.go:89] found id: ""
	I0626 20:56:39.002141   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:39.002198   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.006799   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:39.006864   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:39.042531   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:39.042556   47309 cri.go:89] found id: ""
	I0626 20:56:39.042566   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:39.042621   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.047228   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:39.047301   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:39.080810   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.080842   47309 cri.go:89] found id: ""
	I0626 20:56:39.080850   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:39.080916   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.085173   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:39.085238   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:39.116857   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:39.116886   47309 cri.go:89] found id: ""
	I0626 20:56:39.116895   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:39.116946   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.121912   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:39.122007   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:39.166886   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.166912   47309 cri.go:89] found id: ""
	I0626 20:56:39.166920   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:39.166972   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.171344   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:39.171420   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:39.205333   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:39.205358   47309 cri.go:89] found id: ""
	I0626 20:56:39.205366   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:39.205445   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.211414   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:39.211491   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:39.249068   47309 cri.go:89] found id: ""
	I0626 20:56:39.249092   47309 logs.go:284] 0 containers: []
	W0626 20:56:39.249103   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:39.249110   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:39.249171   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:39.283295   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.283314   47309 cri.go:89] found id: ""
	I0626 20:56:39.283325   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:39.283372   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.287514   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:39.287537   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:39.420720   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:39.420752   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.479018   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:39.479052   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.512285   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:39.512313   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.549886   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:39.549922   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.590619   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:39.590647   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:40.076597   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:40.076642   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:40.092551   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:40.092581   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:40.135655   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:40.135699   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:40.184590   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:40.184628   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:40.238354   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:40.238393   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:40.283033   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:40.283075   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:41.567686   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:41.584431   47605 api_server.go:72] duration metric: took 4m9.528462616s to wait for apiserver process to appear ...
	I0626 20:56:41.584462   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:56:41.584492   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:41.584553   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:41.622027   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:41.622051   47605 cri.go:89] found id: ""
	I0626 20:56:41.622061   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:41.622119   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.626209   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:41.626271   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:41.658658   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:41.658680   47605 cri.go:89] found id: ""
	I0626 20:56:41.658689   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:41.658746   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.666357   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:41.666437   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:41.702344   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:41.702369   47605 cri.go:89] found id: ""
	I0626 20:56:41.702378   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:41.702443   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.706706   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:41.706775   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:41.743534   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:41.743554   47605 cri.go:89] found id: ""
	I0626 20:56:41.743561   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:41.743619   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.748338   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:41.748408   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:41.780299   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:41.780324   47605 cri.go:89] found id: ""
	I0626 20:56:41.780333   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:41.780392   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.785308   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:41.785395   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:41.819335   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:41.819361   47605 cri.go:89] found id: ""
	I0626 20:56:41.819370   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:41.819415   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.823767   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:41.823832   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:41.855049   47605 cri.go:89] found id: ""
	I0626 20:56:41.855079   47605 logs.go:284] 0 containers: []
	W0626 20:56:41.855088   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:41.855094   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:41.855147   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:41.886378   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:41.886400   47605 cri.go:89] found id: ""
	I0626 20:56:41.886408   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:41.886459   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.891748   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:41.891777   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:42.003933   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:42.003968   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:42.018182   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:42.018230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:42.145038   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:42.145074   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:42.181403   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:42.181438   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:42.224428   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:42.224467   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:42.260067   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:42.260097   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:42.312924   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:42.312972   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:42.347173   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:42.347203   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:42.920689   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:42.920725   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:42.970428   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:42.970456   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:43.021561   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.021587   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:42.886551   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:56:42.892462   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:56:42.894253   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:42.894277   47309 api_server.go:131] duration metric: took 3.931242905s to wait for apiserver health ...
	I0626 20:56:42.894286   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:42.894309   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:42.894364   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:42.931699   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:42.931728   47309 cri.go:89] found id: ""
	I0626 20:56:42.931736   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:42.931792   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.936873   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:42.936944   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:42.968701   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:42.968720   47309 cri.go:89] found id: ""
	I0626 20:56:42.968727   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:42.968778   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.974309   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:42.974381   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:43.010388   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:43.010416   47309 cri.go:89] found id: ""
	I0626 20:56:43.010425   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:43.010482   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.015524   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:43.015582   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:43.049074   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.049103   47309 cri.go:89] found id: ""
	I0626 20:56:43.049112   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:43.049173   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.053750   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:43.053814   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:43.096699   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:43.096727   47309 cri.go:89] found id: ""
	I0626 20:56:43.096734   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:43.096776   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.101210   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:43.101264   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:43.133316   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:43.133344   47309 cri.go:89] found id: ""
	I0626 20:56:43.133354   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:43.133420   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.138226   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:43.138289   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:43.169863   47309 cri.go:89] found id: ""
	I0626 20:56:43.169896   47309 logs.go:284] 0 containers: []
	W0626 20:56:43.169903   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:43.169908   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:43.169962   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:43.201859   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.201884   47309 cri.go:89] found id: ""
	I0626 20:56:43.201892   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:43.201942   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.207043   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:43.207072   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.264723   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:43.264755   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.301988   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.302016   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:43.344103   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:43.344132   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:43.357414   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:43.357445   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:43.486425   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:43.486453   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:43.529205   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:43.529239   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:43.575311   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:43.575344   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:44.074749   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:44.074790   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:44.184946   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:44.184987   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:44.221993   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:44.222028   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:44.263095   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:44.263127   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:46.817987   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:46.818014   47309 system_pods.go:61] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.818019   47309 system_pods.go:61] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.818023   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.818027   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.818031   47309 system_pods.go:61] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.818035   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.818041   47309 system_pods.go:61] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.818047   47309 system_pods.go:61] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.818052   47309 system_pods.go:74] duration metric: took 3.923762125s to wait for pod list to return data ...
	I0626 20:56:46.818061   47309 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:46.821789   47309 default_sa.go:45] found service account: "default"
	I0626 20:56:46.821811   47309 default_sa.go:55] duration metric: took 3.746079ms for default service account to be created ...
	I0626 20:56:46.821818   47309 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:46.830080   47309 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:46.830117   47309 system_pods.go:89] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.830127   47309 system_pods.go:89] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.830134   47309 system_pods.go:89] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.830141   47309 system_pods.go:89] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.830147   47309 system_pods.go:89] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.830153   47309 system_pods.go:89] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.830165   47309 system_pods.go:89] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.830178   47309 system_pods.go:89] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.830186   47309 system_pods.go:126] duration metric: took 8.363064ms to wait for k8s-apps to be running ...
	I0626 20:56:46.830198   47309 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:46.830250   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:46.851429   47309 system_svc.go:56] duration metric: took 21.223321ms WaitForService to wait for kubelet.
	I0626 20:56:46.851456   47309 kubeadm.go:581] duration metric: took 4m26.355992846s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:46.851482   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:46.856152   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:46.856177   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:46.856187   47309 node_conditions.go:105] duration metric: took 4.700595ms to run NodePressure ...
	I0626 20:56:46.856197   47309 start.go:228] waiting for startup goroutines ...
	I0626 20:56:46.856203   47309 start.go:233] waiting for cluster config update ...
	I0626 20:56:46.856212   47309 start.go:242] writing updated cluster config ...
	I0626 20:56:46.856472   47309 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:46.911414   47309 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:46.913280   47309 out.go:177] * Done! kubectl is now configured to use "no-preload-934450" cluster and "default" namespace by default
	I0626 20:56:45.561459   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:56:45.567555   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0626 20:56:45.568704   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:45.568720   47605 api_server.go:131] duration metric: took 3.984252941s to wait for apiserver health ...
	I0626 20:56:45.568728   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:45.568745   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:45.568789   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:45.608235   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:45.608261   47605 cri.go:89] found id: ""
	I0626 20:56:45.608270   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:45.608335   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.612705   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:45.612774   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:45.649330   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.649353   47605 cri.go:89] found id: ""
	I0626 20:56:45.649362   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:45.649440   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.655104   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:45.655178   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:45.699690   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.699711   47605 cri.go:89] found id: ""
	I0626 20:56:45.699722   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:45.699767   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.704455   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:45.704515   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:45.743181   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:45.743209   47605 cri.go:89] found id: ""
	I0626 20:56:45.743218   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:45.743283   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.748030   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:45.748098   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:45.787325   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:45.787352   47605 cri.go:89] found id: ""
	I0626 20:56:45.787360   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:45.787406   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.792119   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:45.792191   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:45.833192   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:45.833215   47605 cri.go:89] found id: ""
	I0626 20:56:45.833222   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:45.833279   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.838399   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:45.838464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:45.878372   47605 cri.go:89] found id: ""
	I0626 20:56:45.878403   47605 logs.go:284] 0 containers: []
	W0626 20:56:45.878410   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:45.878415   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:45.878464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:45.917051   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:45.917074   47605 cri.go:89] found id: ""
	I0626 20:56:45.917081   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:45.917125   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.921484   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:45.921508   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.962659   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:45.962699   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.993644   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:45.993674   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:46.055087   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:46.055130   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:46.574535   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:46.574581   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:46.617139   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:46.617174   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:46.729727   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:46.729768   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:46.860871   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:46.860908   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:46.922618   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:46.922657   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:46.975973   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:46.976000   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:47.017458   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:47.017488   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:47.058540   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:47.058567   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:49.582112   47605 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:49.582139   47605 system_pods.go:61] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.582145   47605 system_pods.go:61] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.582149   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.582153   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.582157   47605 system_pods.go:61] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.582163   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.582169   47605 system_pods.go:61] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.582175   47605 system_pods.go:61] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.582180   47605 system_pods.go:74] duration metric: took 4.013448182s to wait for pod list to return data ...
	I0626 20:56:49.582187   47605 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:49.588793   47605 default_sa.go:45] found service account: "default"
	I0626 20:56:49.588827   47605 default_sa.go:55] duration metric: took 6.634132ms for default service account to be created ...
	I0626 20:56:49.588836   47605 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:49.596519   47605 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:49.596549   47605 system_pods.go:89] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.596555   47605 system_pods.go:89] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.596562   47605 system_pods.go:89] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.596570   47605 system_pods.go:89] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.596577   47605 system_pods.go:89] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.596585   47605 system_pods.go:89] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.596600   47605 system_pods.go:89] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.596612   47605 system_pods.go:89] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.596622   47605 system_pods.go:126] duration metric: took 7.781697ms to wait for k8s-apps to be running ...
	I0626 20:56:49.596633   47605 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:49.596684   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:49.613188   47605 system_svc.go:56] duration metric: took 16.545322ms WaitForService to wait for kubelet.
	I0626 20:56:49.613212   47605 kubeadm.go:581] duration metric: took 4m17.557252465s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:49.613231   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:49.616820   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:49.616845   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:49.616854   47605 node_conditions.go:105] duration metric: took 3.619443ms to run NodePressure ...
	I0626 20:56:49.616864   47605 start.go:228] waiting for startup goroutines ...
	I0626 20:56:49.616870   47605 start.go:233] waiting for cluster config update ...
	I0626 20:56:49.616878   47605 start.go:242] writing updated cluster config ...
	I0626 20:56:49.617126   47605 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:49.665468   47605 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:49.667447   47605 out.go:177] * Done! kubectl is now configured to use "embed-certs-299839" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 20:47:04 UTC, ends at Mon 2023-06-26 21:01:54 UTC. --
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.486945314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5cbf3a2c-3c31-41d9-afd5-1f229d229801 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.487225410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d,PodSandboxId:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812770501120167,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 152de092,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33,PodSandboxId:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812770373727467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,},Annotations:map[string]string{io.kubernetes.container.hash: 26e2eaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6,PodSandboxId:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812769156904487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,},Annotations:map[string]string{io.kubernetes.container.hash: e83624bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289,PodSandboxId:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812745751790520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854,PodSandboxId:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812745633512873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc
1bfa3ad7e57b22,},Annotations:map[string]string{io.kubernetes.container.hash: b6aa293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f,PodSandboxId:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812745030324979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5cbf3a2c-3c31-41d9-afd5-1f229d229801 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.523920516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=81ec162f-edcd-4e89-9476-e430c81c4b8f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.524018870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=81ec162f-edcd-4e89-9476-e430c81c4b8f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.524386896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d,PodSandboxId:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812770501120167,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 152de092,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33,PodSandboxId:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812770373727467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,},Annotations:map[string]string{io.kubernetes.container.hash: 26e2eaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6,PodSandboxId:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812769156904487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,},Annotations:map[string]string{io.kubernetes.container.hash: e83624bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289,PodSandboxId:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812745751790520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854,PodSandboxId:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812745633512873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc
1bfa3ad7e57b22,},Annotations:map[string]string{io.kubernetes.container.hash: b6aa293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f,PodSandboxId:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812745030324979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=81ec162f-edcd-4e89-9476-e430c81c4b8f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.561308178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=42831ce4-739d-49da-8425-ffe201b69356 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.561401980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=42831ce4-739d-49da-8425-ffe201b69356 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.561735494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d,PodSandboxId:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812770501120167,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 152de092,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33,PodSandboxId:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812770373727467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,},Annotations:map[string]string{io.kubernetes.container.hash: 26e2eaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6,PodSandboxId:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812769156904487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,},Annotations:map[string]string{io.kubernetes.container.hash: e83624bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289,PodSandboxId:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812745751790520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854,PodSandboxId:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812745633512873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc
1bfa3ad7e57b22,},Annotations:map[string]string{io.kubernetes.container.hash: b6aa293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f,PodSandboxId:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812745030324979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=42831ce4-739d-49da-8425-ffe201b69356 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.598991422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=16b436dc-ec35-4281-85b0-ba31fab3ef81 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.599093947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=16b436dc-ec35-4281-85b0-ba31fab3ef81 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.599369407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d,PodSandboxId:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812770501120167,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 152de092,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33,PodSandboxId:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812770373727467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,},Annotations:map[string]string{io.kubernetes.container.hash: 26e2eaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6,PodSandboxId:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812769156904487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,},Annotations:map[string]string{io.kubernetes.container.hash: e83624bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289,PodSandboxId:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812745751790520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854,PodSandboxId:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812745633512873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc
1bfa3ad7e57b22,},Annotations:map[string]string{io.kubernetes.container.hash: b6aa293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f,PodSandboxId:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812745030324979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=16b436dc-ec35-4281-85b0-ba31fab3ef81 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.639382184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6094f632-a146-45de-9abf-9c332f0675ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.639453289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6094f632-a146-45de-9abf-9c332f0675ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.639617640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d,PodSandboxId:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812770501120167,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 152de092,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33,PodSandboxId:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812770373727467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,},Annotations:map[string]string{io.kubernetes.container.hash: 26e2eaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6,PodSandboxId:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812769156904487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,},Annotations:map[string]string{io.kubernetes.container.hash: e83624bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289,PodSandboxId:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812745751790520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854,PodSandboxId:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812745633512873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc
1bfa3ad7e57b22,},Annotations:map[string]string{io.kubernetes.container.hash: b6aa293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f,PodSandboxId:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812745030324979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6094f632-a146-45de-9abf-9c332f0675ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.648465546Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=3f4d3cc1-cc26-4b7a-8437-6a4b314fe57d name=/runtime.v1.RuntimeService/Status
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.648531808Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=3f4d3cc1-cc26-4b7a-8437-6a4b314fe57d name=/runtime.v1.RuntimeService/Status
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.675949216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=67b881ea-18f5-49ca-9b96-26d6470ca6a4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.676012261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=67b881ea-18f5-49ca-9b96-26d6470ca6a4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.676289428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d,PodSandboxId:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812770501120167,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 152de092,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33,PodSandboxId:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812770373727467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,},Annotations:map[string]string{io.kubernetes.container.hash: 26e2eaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6,PodSandboxId:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812769156904487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,},Annotations:map[string]string{io.kubernetes.container.hash: e83624bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289,PodSandboxId:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812745751790520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854,PodSandboxId:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812745633512873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc
1bfa3ad7e57b22,},Annotations:map[string]string{io.kubernetes.container.hash: b6aa293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f,PodSandboxId:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812745030324979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=67b881ea-18f5-49ca-9b96-26d6470ca6a4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.712872807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bd798331-5dd2-4576-b0da-0424f9ee8008 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.712961782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bd798331-5dd2-4576-b0da-0424f9ee8008 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.713229808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d,PodSandboxId:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812770501120167,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 152de092,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33,PodSandboxId:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812770373727467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,},Annotations:map[string]string{io.kubernetes.container.hash: 26e2eaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6,PodSandboxId:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812769156904487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,},Annotations:map[string]string{io.kubernetes.container.hash: e83624bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289,PodSandboxId:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812745751790520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854,PodSandboxId:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812745633512873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc
1bfa3ad7e57b22,},Annotations:map[string]string{io.kubernetes.container.hash: b6aa293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f,PodSandboxId:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812745030324979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bd798331-5dd2-4576-b0da-0424f9ee8008 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.748308851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bd9b50e7-7843-4f85-a61d-50f00e2bc416 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.748371471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bd9b50e7-7843-4f85-a61d-50f00e2bc416 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:01:54 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:01:54.748553056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d,PodSandboxId:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812770501120167,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 152de092,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33,PodSandboxId:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812770373727467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,},Annotations:map[string]string{io.kubernetes.container.hash: 26e2eaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6,PodSandboxId:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812769156904487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,},Annotations:map[string]string{io.kubernetes.container.hash: e83624bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289,PodSandboxId:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812745751790520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854,PodSandboxId:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812745633512873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc
1bfa3ad7e57b22,},Annotations:map[string]string{io.kubernetes.container.hash: b6aa293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f,PodSandboxId:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812745030324979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bd9b50e7-7843-4f85-a61d-50f00e2bc416 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	42f5349c90125       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5aa916845b003
	c96344f29939b       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   9 minutes ago       Running             kube-proxy                0                   ba27bdfc9d888
	6a2b730696b42       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   0175a496a1704
	ac747b676e948       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   9 minutes ago       Running             kube-scheduler            2                   9252b6099b201
	27d078cc8ea69       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   9 minutes ago       Running             etcd                      2                   a43553760c796
	5e21f96f0cb7d       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   9 minutes ago       Running             kube-controller-manager   2                   e3b844a6be5da
	5903c5fd077ea       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   9 minutes ago       Running             kube-apiserver            2                   7ab914de47588
	
	* 
	* ==> coredns [6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55036 - 5397 "HINFO IN 5672118673736255248.4507566907500056261. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03850101s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-473235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-473235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=default-k8s-diff-port-473235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_52_34_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:52:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-473235
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 21:01:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 20:58:00 +0000   Mon, 26 Jun 2023 20:52:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 20:58:00 +0000   Mon, 26 Jun 2023 20:52:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 20:58:00 +0000   Mon, 26 Jun 2023 20:52:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 20:58:00 +0000   Mon, 26 Jun 2023 20:52:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.238
	  Hostname:    default-k8s-diff-port-473235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 92faed464cce40ff8645cc065dd0c89b
	  System UUID:                92faed46-4cce-40ff-8645-cc065dd0c89b
	  Boot ID:                    3de82d6f-cc55-451b-9343-bc4f633f6654
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-q7zms                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-default-k8s-diff-port-473235                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-473235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-473235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-k4hzc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-default-k8s-diff-port-473235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-74d5c6b9c-8qcw9                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m32s (x8 over 9m32s)  kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m32s (x8 over 9m32s)  kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m32s (x7 over 9m32s)  kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node default-k8s-diff-port-473235 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m11s                  kubelet          Node default-k8s-diff-port-473235 status is now: NodeReady
	  Normal  RegisteredNode           9m10s                  node-controller  Node default-k8s-diff-port-473235 event: Registered Node default-k8s-diff-port-473235 in Controller
	
	* 
	* ==> dmesg <==
	* [Jun26 20:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073640] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jun26 20:47] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.236869] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151817] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.505159] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.262372] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.116817] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.154347] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.127880] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.289153] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +17.851418] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +19.057076] kauditd_printk_skb: 29 callbacks suppressed
	[Jun26 20:52] systemd-fstab-generator[3546]: Ignoring "noauto" for root device
	[ +10.858660] systemd-fstab-generator[3874]: Ignoring "noauto" for root device
	[ +21.768540] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854] <==
	* {"level":"info","ts":"2023-06-26T20:52:27.381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6c736ad0f9c7068 switched to configuration voters=(15476398761401151592)"}
	{"level":"info","ts":"2023-06-26T20:52:27.382Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"30634cbf5a4943f7","local-member-id":"d6c736ad0f9c7068","added-peer-id":"d6c736ad0f9c7068","added-peer-peer-urls":["https://192.168.61.238:2380"]}
	{"level":"info","ts":"2023-06-26T20:52:27.384Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-26T20:52:27.384Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.238:2380"}
	{"level":"info","ts":"2023-06-26T20:52:27.384Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.238:2380"}
	{"level":"info","ts":"2023-06-26T20:52:27.385Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"d6c736ad0f9c7068","initial-advertise-peer-urls":["https://192.168.61.238:2380"],"listen-peer-urls":["https://192.168.61.238:2380"],"advertise-client-urls":["https://192.168.61.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-26T20:52:27.385Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-26T20:52:28.153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6c736ad0f9c7068 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-26T20:52:28.153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6c736ad0f9c7068 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-26T20:52:28.153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6c736ad0f9c7068 received MsgPreVoteResp from d6c736ad0f9c7068 at term 1"}
	{"level":"info","ts":"2023-06-26T20:52:28.153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6c736ad0f9c7068 became candidate at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:28.153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6c736ad0f9c7068 received MsgVoteResp from d6c736ad0f9c7068 at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:28.153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6c736ad0f9c7068 became leader at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:28.153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d6c736ad0f9c7068 elected leader d6c736ad0f9c7068 at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:28.155Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d6c736ad0f9c7068","local-member-attributes":"{Name:default-k8s-diff-port-473235 ClientURLs:[https://192.168.61.238:2379]}","request-path":"/0/members/d6c736ad0f9c7068/attributes","cluster-id":"30634cbf5a4943f7","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-26T20:52:28.155Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T20:52:28.155Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:28.156Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"30634cbf5a4943f7","local-member-id":"d6c736ad0f9c7068","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:28.158Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.238:2379"}
	
	* 
	* ==> kernel <==
	*  21:01:55 up 14 min,  0 users,  load average: 0.13, 0.22, 0.20
	Linux default-k8s-diff-port-473235 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4] <==
	* E0626 20:57:31.363446       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 20:57:31.364669       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 20:58:30.264396       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.116.83:443: connect: connection refused
	I0626 20:58:30.264445       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 20:58:31.363981       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 20:58:31.364085       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 20:58:31.364111       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 20:58:31.365289       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 20:58:31.365380       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 20:58:31.365428       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 20:59:30.264937       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.116.83:443: connect: connection refused
	I0626 20:59:30.265299       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0626 21:00:30.264320       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.116.83:443: connect: connection refused
	I0626 21:00:30.264511       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:00:31.364866       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:00:31.365022       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:00:31.365266       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:00:31.366235       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:00:31.366360       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:00:31.366401       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:01:30.265467       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.116.83:443: connect: connection refused
	I0626 21:01:30.265751       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f] <==
	* W0626 20:55:45.881470       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 20:56:15.319368       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 20:56:15.893354       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 20:56:45.326022       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 20:56:45.903938       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 20:57:15.332831       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 20:57:15.912982       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 20:57:45.340644       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 20:57:45.922767       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 20:58:15.346023       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 20:58:15.934567       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 20:58:45.351814       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 20:58:45.943426       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 20:59:15.358937       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 20:59:15.953508       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 20:59:45.373862       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 20:59:45.961028       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:00:15.380453       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:00:15.969964       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:00:45.387659       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:00:45.979246       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:01:15.392937       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:01:15.990749       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:01:45.398789       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:01:45.999120       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33] <==
	* I0626 20:52:51.077409       1 node.go:141] Successfully retrieved node IP: 192.168.61.238
	I0626 20:52:51.077580       1 server_others.go:110] "Detected node IP" address="192.168.61.238"
	I0626 20:52:51.077658       1 server_others.go:554] "Using iptables proxy"
	I0626 20:52:51.126534       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0626 20:52:51.126614       1 server_others.go:192] "Using iptables Proxier"
	I0626 20:52:51.126975       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 20:52:51.128297       1 server.go:658] "Version info" version="v1.27.3"
	I0626 20:52:51.128368       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 20:52:51.130539       1 config.go:188] "Starting service config controller"
	I0626 20:52:51.130981       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 20:52:51.131327       1 config.go:97] "Starting endpoint slice config controller"
	I0626 20:52:51.131386       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 20:52:51.133313       1 config.go:315] "Starting node config controller"
	I0626 20:52:51.133578       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 20:52:51.232300       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 20:52:51.232331       1 shared_informer.go:318] Caches are synced for service config
	I0626 20:52:51.234710       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289] <==
	* W0626 20:52:31.356334       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:31.356452       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:31.416591       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 20:52:31.416652       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0626 20:52:31.454908       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:31.455033       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:31.545196       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:52:31.545440       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 20:52:31.554745       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 20:52:31.554876       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0626 20:52:31.561066       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:31.561244       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:31.631993       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:52:31.632104       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0626 20:52:31.641555       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 20:52:31.641607       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 20:52:31.753372       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:52:31.753448       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 20:52:31.844530       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:31.844596       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:31.845701       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 20:52:31.846102       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0626 20:52:31.846631       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:52:31.846654       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0626 20:52:33.597347       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 20:47:04 UTC, ends at Mon 2023-06-26 21:01:55 UTC. --
	Jun 26 20:59:17 default-k8s-diff-port-473235 kubelet[3881]: E0626 20:59:17.304909    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 20:59:30 default-k8s-diff-port-473235 kubelet[3881]: E0626 20:59:30.306281    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 20:59:34 default-k8s-diff-port-473235 kubelet[3881]: E0626 20:59:34.402698    3881 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 20:59:34 default-k8s-diff-port-473235 kubelet[3881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 20:59:34 default-k8s-diff-port-473235 kubelet[3881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 20:59:34 default-k8s-diff-port-473235 kubelet[3881]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 20:59:45 default-k8s-diff-port-473235 kubelet[3881]: E0626 20:59:45.305734    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 20:59:57 default-k8s-diff-port-473235 kubelet[3881]: E0626 20:59:57.304974    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:00:12 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:00:12.306929    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:00:23 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:00:23.305076    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:00:34 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:00:34.403094    3881 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:00:34 default-k8s-diff-port-473235 kubelet[3881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:00:34 default-k8s-diff-port-473235 kubelet[3881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:00:34 default-k8s-diff-port-473235 kubelet[3881]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:00:37 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:00:37.305243    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:00:49 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:00:49.304926    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:01:02 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:01:02.306552    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:01:16 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:01:16.304915    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:01:29 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:01:29.304649    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:01:34 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:01:34.404204    3881 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:01:34 default-k8s-diff-port-473235 kubelet[3881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:01:34 default-k8s-diff-port-473235 kubelet[3881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:01:34 default-k8s-diff-port-473235 kubelet[3881]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:01:41 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:01:41.304807    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:01:54 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:01:54.306130    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	
	* 
	* ==> storage-provisioner [42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d] <==
	* I0626 20:52:50.790103       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 20:52:50.815820       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 20:52:50.816010       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 20:52:50.838388       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 20:52:50.838589       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-473235_07c492cf-25c5-493d-8be5-4c418e941ceb!
	I0626 20:52:50.838651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea9a2fb3-bc39-4436-8db0-dda6b489ab3d", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-473235_07c492cf-25c5-493d-8be5-4c418e941ceb became leader
	I0626 20:52:50.940762       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-473235_07c492cf-25c5-493d-8be5-4c418e941ceb!

-- /stdout --
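Two recurring errors dominate the kubelet log above, and they have different causes. The metrics-server ImagePullBackOff is expected: these tests deliberately re-point the addon at the unreachable registry fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" rows in the Audit table later in this report), so the pull can never succeed. The iptables canary failure instead indicates that the guest kernel has no ip6table_nat module available. A manual spot check, written in the same "minikube ssh" form the report itself uses (illustrative commands, not test output; if the modprobe fails, the guest kernel simply does not ship the module):

out/minikube-linux-amd64 -p default-k8s-diff-port-473235 ssh "lsmod | grep ip6table"      # empty output means the module is not loaded
out/minikube-linux-amd64 -p default-k8s-diff-port-473235 ssh "sudo modprobe ip6table_nat" # succeeds only if the kernel ships the module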
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-473235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-8qcw9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-473235 describe pod metrics-server-74d5c6b9c-8qcw9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-473235 describe pod metrics-server-74d5c6b9c-8qcw9: exit status 1 (64.875255ms)

** stderr **
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-8qcw9" not found
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-473235 describe pod metrics-server-74d5c6b9c-8qcw9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0626 20:55:23.874030   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490377 -n old-k8s-version-490377
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-06-26 21:03:49.874740773 +0000 UTC m=+5294.374768598
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
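Before the automated post-mortem below, a manual spot check would show whether the dashboard workload was ever created at all. These commands are illustrative only; the label selector comes from the wait condition above, and none of this output appears in the test log:

kubectl --context old-k8s-version-490377 -n kubernetes-dashboard get deploy,pods -l k8s-app=kubernetes-dashboard
kubectl --context old-k8s-version-490377 -n kubernetes-dashboard get events --sort-by=.lastTimestamp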
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490377 -n old-k8s-version-490377
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-490377 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-490377 logs -n 25: (1.618828533s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-490377        | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-123924                              | stopped-upgrade-123924       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603225 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | disable-driver-mounts-603225                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934450             | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC | 26 Jun 23 20:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490377             | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-299839            | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-473235  | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC | 26 Jun 23 20:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC |                     |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934450                  | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-299839                 | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-473235       | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:52 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
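A reading note on the Audit table: rows with an empty End Time are commands that had not completed when the log was captured, and every "stop" invocation above falls into that group, consistent with the TestStartStop Stop failures recorded in this run. Re-running one by hand (a hypothetical manual invocation, not test output) would look like:

out/minikube-linux-amd64 stop -p old-k8s-version-490377 --alsologtostderr -v=3

Here --alsologtostderr mirrors logs to stderr and -v=3 raises the klog verbosity, which is why these attempts produce the detailed output quoted elsewhere in the report.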
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 20:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 20:44:35.222921   47779 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:44:35.223059   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223070   47779 out.go:309] Setting ErrFile to fd 2...
	I0626 20:44:35.223074   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223199   47779 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:44:35.223797   47779 out.go:303] Setting JSON to false
	I0626 20:44:35.224674   47779 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5222,"bootTime":1687807053,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 20:44:35.224734   47779 start.go:137] virtualization: kvm guest
	I0626 20:44:35.226901   47779 out.go:177] * [default-k8s-diff-port-473235] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 20:44:35.228842   47779 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 20:44:35.228804   47779 notify.go:220] Checking for updates...
	I0626 20:44:35.230224   47779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 20:44:35.231788   47779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:44:35.233239   47779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:44:35.234554   47779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 20:44:35.236823   47779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 20:44:35.238432   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:44:35.238825   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.238878   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.253669   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0626 20:44:35.254014   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.254589   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.254610   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.254907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.255090   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.255322   47779 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 20:44:35.255597   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.255627   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.269620   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39451
	I0626 20:44:35.270027   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.270571   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.270599   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.270857   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.271037   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.302607   47779 out.go:177] * Using the kvm2 driver based on existing profile
	I0626 20:44:35.303877   47779 start.go:297] selected driver: kvm2
	I0626 20:44:35.303889   47779 start.go:954] validating driver "kvm2" against &{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.303997   47779 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 20:44:35.304600   47779 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.304681   47779 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 20:44:35.319036   47779 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 20:44:35.319459   47779 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 20:44:35.319499   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:44:35.319516   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:44:35.319532   47779 start_flags.go:319] config:
	{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.319725   47779 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.321690   47779 out.go:177] * Starting control plane node default-k8s-diff-port-473235 in cluster default-k8s-diff-port-473235
	I0626 20:44:33.713644   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:35.323076   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:44:35.323119   47779 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 20:44:35.323145   47779 cache.go:57] Caching tarball of preloaded images
	I0626 20:44:35.323245   47779 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 20:44:35.323260   47779 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 20:44:35.323385   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:44:35.323607   47779 start.go:365] acquiring machines lock for default-k8s-diff-port-473235: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:44:39.793629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:42.865602   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:48.945651   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:52.017646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:58.097650   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:01.169629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:07.249647   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:10.321634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:16.401660   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:19.473641   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:25.553634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:28.625721   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:34.705617   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:37.777753   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:43.857659   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:46.929661   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:53.009637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:56.081646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:02.161637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:05.233633   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:11.313640   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:14.317303   47309 start.go:369] acquired machines lock for "no-preload-934450" in 2m47.59820508s
	I0626 20:46:14.317355   47309 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:14.317388   47309 fix.go:54] fixHost starting: 
	I0626 20:46:14.317703   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:14.317733   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:14.331991   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0626 20:46:14.332358   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:14.332862   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:46:14.332888   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:14.333180   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:14.333368   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:14.333556   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:46:14.334930   47309 fix.go:102] recreateIfNeeded on no-preload-934450: state=Stopped err=<nil>
	I0626 20:46:14.334954   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	W0626 20:46:14.335122   47309 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:14.336692   47309 out.go:177] * Restarting existing kvm2 VM for "no-preload-934450" ...
	I0626 20:46:14.338056   47309 main.go:141] libmachine: (no-preload-934450) Calling .Start
	I0626 20:46:14.338201   47309 main.go:141] libmachine: (no-preload-934450) Ensuring networks are active...
	I0626 20:46:14.339255   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network default is active
	I0626 20:46:14.339575   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network mk-no-preload-934450 is active
	I0626 20:46:14.339980   47309 main.go:141] libmachine: (no-preload-934450) Getting domain xml...
	I0626 20:46:14.340638   47309 main.go:141] libmachine: (no-preload-934450) Creating domain...
	I0626 20:46:15.550725   47309 main.go:141] libmachine: (no-preload-934450) Waiting to get IP...
	I0626 20:46:15.551641   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.552053   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.552125   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.552057   48070 retry.go:31] will retry after 285.629833ms: waiting for machine to come up
	I0626 20:46:15.839584   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.839950   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.839976   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.839920   48070 retry.go:31] will retry after 318.234269ms: waiting for machine to come up
	I0626 20:46:16.159361   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.159793   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.159823   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.159752   48070 retry.go:31] will retry after 486.280811ms: waiting for machine to come up
	I0626 20:46:14.315357   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:14.315401   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:46:14.317194   46683 machine.go:91] provisioned docker machine in 4m37.381545898s
	I0626 20:46:14.317230   46683 fix.go:56] fixHost completed within 4m37.403983922s
	I0626 20:46:14.317236   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 4m37.404002624s
	W0626 20:46:14.317252   46683 start.go:672] error starting host: provision: host is not running
	W0626 20:46:14.317326   46683 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0626 20:46:14.317333   46683 start.go:687] Will try again in 5 seconds ...
	I0626 20:46:16.647364   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.647777   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.647803   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.647721   48070 retry.go:31] will retry after 396.658606ms: waiting for machine to come up
	I0626 20:46:17.046604   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.047131   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.047156   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.047033   48070 retry.go:31] will retry after 741.382401ms: waiting for machine to come up
	I0626 20:46:17.789616   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.790035   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.790068   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.790014   48070 retry.go:31] will retry after 636.769895ms: waiting for machine to come up
	I0626 20:46:18.427899   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:18.428300   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:18.428326   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:18.428272   48070 retry.go:31] will retry after 869.736092ms: waiting for machine to come up
	I0626 20:46:19.299429   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:19.299742   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:19.299765   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:19.299717   48070 retry.go:31] will retry after 1.261709663s: waiting for machine to come up
	I0626 20:46:20.563421   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:20.563778   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:20.563807   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:20.563751   48070 retry.go:31] will retry after 1.280588584s: waiting for machine to come up
	I0626 20:46:19.318965   46683 start.go:365] acquiring machines lock for old-k8s-version-490377: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:46:21.846094   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:21.846530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:21.846557   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:21.846475   48070 retry.go:31] will retry after 1.542478163s: waiting for machine to come up
	I0626 20:46:23.391088   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:23.391530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:23.391559   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:23.391474   48070 retry.go:31] will retry after 2.115450652s: waiting for machine to come up
	I0626 20:46:25.508447   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:25.508882   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:25.508915   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:25.508826   48070 retry.go:31] will retry after 3.403199971s: waiting for machine to come up
	I0626 20:46:28.916347   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:28.916756   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:28.916782   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:28.916706   48070 retry.go:31] will retry after 3.011345508s: waiting for machine to come up
	I0626 20:46:33.094365   47605 start.go:369] acquired machines lock for "embed-certs-299839" in 2m23.878841424s
	I0626 20:46:33.094419   47605 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:33.094440   47605 fix.go:54] fixHost starting: 
	I0626 20:46:33.094827   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:33.094856   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:33.114045   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0626 20:46:33.114400   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:33.114927   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:46:33.114949   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:33.115244   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:33.115434   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:33.115573   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:46:33.116751   47605 fix.go:102] recreateIfNeeded on embed-certs-299839: state=Stopped err=<nil>
	I0626 20:46:33.116783   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	W0626 20:46:33.116944   47605 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:33.119904   47605 out.go:177] * Restarting existing kvm2 VM for "embed-certs-299839" ...
	I0626 20:46:33.121277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Start
	I0626 20:46:33.121442   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring networks are active...
	I0626 20:46:33.122062   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network default is active
	I0626 20:46:33.122397   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network mk-embed-certs-299839 is active
	I0626 20:46:33.122783   47605 main.go:141] libmachine: (embed-certs-299839) Getting domain xml...
	I0626 20:46:33.123400   47605 main.go:141] libmachine: (embed-certs-299839) Creating domain...
	I0626 20:46:31.930997   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931492   47309 main.go:141] libmachine: (no-preload-934450) Found IP for machine: 192.168.50.38
	I0626 20:46:31.931507   47309 main.go:141] libmachine: (no-preload-934450) Reserving static IP address...
	I0626 20:46:31.931524   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has current primary IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931877   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.931901   47309 main.go:141] libmachine: (no-preload-934450) DBG | skip adding static IP to network mk-no-preload-934450 - found existing host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"}
	I0626 20:46:31.931916   47309 main.go:141] libmachine: (no-preload-934450) Reserved static IP address: 192.168.50.38
	I0626 20:46:31.931928   47309 main.go:141] libmachine: (no-preload-934450) DBG | Getting to WaitForSSH function...
	I0626 20:46:31.931939   47309 main.go:141] libmachine: (no-preload-934450) Waiting for SSH to be available...
	I0626 20:46:31.934393   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934786   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.934814   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934954   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH client type: external
	I0626 20:46:31.934971   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa (-rw-------)
	I0626 20:46:31.935060   47309 main.go:141] libmachine: (no-preload-934450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:31.935091   47309 main.go:141] libmachine: (no-preload-934450) DBG | About to run SSH command:
	I0626 20:46:31.935112   47309 main.go:141] libmachine: (no-preload-934450) DBG | exit 0
	I0626 20:46:32.021036   47309 main.go:141] libmachine: (no-preload-934450) DBG | SSH cmd err, output: <nil>: 
	I0626 20:46:32.021357   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetConfigRaw
	I0626 20:46:32.022056   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.024943   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025390   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.025426   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025663   47309 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/config.json ...
	I0626 20:46:32.025851   47309 machine.go:88] provisioning docker machine ...
	I0626 20:46:32.025868   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.026092   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026257   47309 buildroot.go:166] provisioning hostname "no-preload-934450"
	I0626 20:46:32.026280   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026450   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.028213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028583   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.028618   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028699   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.028869   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029019   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029154   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.029415   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.029867   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.029887   47309 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-934450 && echo "no-preload-934450" | sudo tee /etc/hostname
	I0626 20:46:32.150597   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-934450
	
	I0626 20:46:32.150629   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.153096   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153441   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.153486   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153576   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.153773   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.153984   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.154125   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.154288   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.154697   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.154723   47309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-934450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-934450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-934450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:32.270792   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
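The two SSH commands above are minikube's hostname provisioning for the restarted VM: set the hostname first, then make /etc/hosts resolve it locally by rewriting (or appending) the 127.0.1.1 entry. A minimal standalone sketch of the same sequence, using the hostname from this run:

	# set the hostname and persist it
	sudo hostname no-preload-934450 && echo "no-preload-934450" | sudo tee /etc/hostname
	# point 127.0.1.1 at the new name so local lookups (sudo, kubelet) resolve it
	if grep -q '^127.0.1.1' /etc/hosts; then
	  sudo sed -i 's/^127.0.1.1.*/127.0.1.1 no-preload-934450/' /etc/hosts
	else
	  echo '127.0.1.1 no-preload-934450' | sudo tee -a /etc/hosts
	fi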
	I0626 20:46:32.270827   47309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:32.270890   47309 buildroot.go:174] setting up certificates
	I0626 20:46:32.270902   47309 provision.go:83] configureAuth start
	I0626 20:46:32.270922   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.271206   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.273824   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274189   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.274213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274310   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.276495   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.276896   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.276927   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.277062   47309 provision.go:138] copyHostCerts
	I0626 20:46:32.277118   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:32.277126   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:32.277188   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:32.277271   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:32.277278   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:32.277300   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:32.277351   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:32.277357   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:32.277393   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:32.277450   47309 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.no-preload-934450 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube no-preload-934450]
	I0626 20:46:32.417361   47309 provision.go:172] copyRemoteCerts
	I0626 20:46:32.417430   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:32.417452   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.419946   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420300   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.420331   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420501   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.420703   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.420864   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.421017   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.501807   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 20:46:32.524284   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:32.546766   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0626 20:46:32.569677   47309 provision.go:86] duration metric: configureAuth took 298.742863ms
	I0626 20:46:32.569711   47309 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:32.569925   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:32.570026   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.572516   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.572864   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.572901   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.573011   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.573178   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573350   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573492   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.573646   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.574084   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.574102   47309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:32.859482   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:32.859509   47309 machine.go:91] provisioned docker machine in 833.647496ms
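The printf above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; the --insecure-registry flag covers the cluster service CIDR (10.96.0.0/12) so registries exposed on service IPs (e.g. the registry addon) can be used without TLS. A quick way to confirm the override took effect on the guest (a sketch; it assumes the crio systemd unit reads that file as an EnvironmentFile, which is how minikube's buildroot image appears to be wired):

	cat /etc/sysconfig/crio.minikube       # expect the CRIO_MINIKUBE_OPTIONS line
	systemctl show crio -p Environment     # the option should appear once crio restarts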
	I0626 20:46:32.859519   47309 start.go:300] post-start starting for "no-preload-934450" (driver="kvm2")
	I0626 20:46:32.859527   47309 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:32.859543   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.859892   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:32.859942   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.862731   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863099   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.863131   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863250   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.863434   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.863570   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.863698   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.946748   47309 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:32.951257   47309 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:32.951278   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:32.951351   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:32.951436   47309 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:32.951516   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:32.959676   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:32.982687   47309 start.go:303] post-start completed in 123.154915ms
	I0626 20:46:32.982714   47309 fix.go:56] fixHost completed within 18.665325334s
	I0626 20:46:32.982763   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.985318   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985693   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.985725   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985868   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.986072   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986226   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986388   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.986547   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.986951   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.986968   47309 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:46:33.094211   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812393.043726278
	
	I0626 20:46:33.094239   47309 fix.go:206] guest clock: 1687812393.043726278
	I0626 20:46:33.094248   47309 fix.go:219] Guest: 2023-06-26 20:46:33.043726278 +0000 UTC Remote: 2023-06-26 20:46:32.98271893 +0000 UTC m=+186.399054274 (delta=61.007348ms)
	I0626 20:46:33.094272   47309 fix.go:190] guest clock delta is within tolerance: 61.007348ms
	I0626 20:46:33.094277   47309 start.go:83] releasing machines lock for "no-preload-934450", held for 18.776943332s
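fix.go compares the guest's `date +%s.%N` output against the host clock; here the 61ms delta is inside tolerance, so no time resync is forced before the machines lock is released. The same comparison can be approximated by hand (a sketch, assuming SSH access as the docker user):

	guest=$(ssh docker@192.168.50.38 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest-host delta: $(echo "$host - $guest" | bc) s"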
	I0626 20:46:33.094309   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.094577   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:33.097365   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097744   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.097775   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097979   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098382   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098586   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098661   47309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:33.098712   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.098797   47309 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:33.098816   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.101252   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101554   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101580   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101599   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101719   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.101873   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.101951   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101981   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.102007   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102160   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.102182   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.102316   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.102443   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102551   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.210044   47309 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:33.215912   47309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:33.359955   47309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:33.366146   47309 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:33.366217   47309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:33.380504   47309 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:33.380526   47309 start.go:466] detecting cgroup driver to use...
	I0626 20:46:33.380579   47309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:33.393306   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:33.404983   47309 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:33.405038   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:33.418216   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:33.432337   47309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:33.531250   47309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:33.645556   47309 docker.go:212] disabling docker service ...
	I0626 20:46:33.645633   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:33.659515   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:33.671856   47309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:33.774921   47309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:33.883215   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:46:33.898847   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:33.917506   47309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:33.917580   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.928683   47309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:33.928743   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.939242   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.949833   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.960544   47309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:33.970988   47309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:33.979977   47309 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:33.980018   47309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:33.992692   47309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:46:34.001898   47309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:34.099514   47309 ssh_runner.go:195] Run: sudo systemctl restart crio
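The failed sysctl above is expected on a fresh boot: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is why minikube immediately falls back to modprobe before enabling IPv4 forwarding and restarting CRI-O. Checking the same prerequisites by hand (a sketch, not minikube's code):

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # key exists only after the module loads
	cat /proc/sys/net/ipv4/ip_forward           # should print 1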
	I0626 20:46:34.265988   47309 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:34.266060   47309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:34.273678   47309 start.go:534] Will wait 60s for crictl version
	I0626 20:46:34.273739   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.277401   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:34.312548   47309 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:34.312630   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.360715   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.413882   47309 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:34.415181   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:34.417841   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418166   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:34.418189   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418410   47309 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:34.422651   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:34.434668   47309 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:34.434717   47309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:34.465589   47309 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:34.465614   47309 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:46:34.465690   47309 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.465708   47309 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.465738   47309 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.465754   47309 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.465788   47309 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.465828   47309 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.465693   47309 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.465936   47309 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.467120   47309 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.467219   47309 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.467247   47309 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.467295   47309 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.467306   47309 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.467250   47309 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.636874   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.655059   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.683826   47309 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0626 20:46:34.683861   47309 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.683928   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.702952   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.703028   47309 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0626 20:46:34.703071   47309 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.703103   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.741790   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.741897   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0626 20:46:34.742006   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.746779   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.749151   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0626 20:46:34.759216   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.760925   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.763727   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.802768   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0626 20:46:34.802855   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0626 20:46:34.802879   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802936   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802879   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:34.875629   47309 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0626 20:46:34.875683   47309 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.875741   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976009   47309 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0626 20:46:34.976048   47309 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.976082   47309 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0626 20:46:34.976100   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976116   47309 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.976117   47309 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0626 20:46:34.976143   47309 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.976156   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976179   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:35.433285   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
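Because no preload tarball exists for this Kubernetes/CRI-O combination (see the LoadImages start above), minikube verifies each required image with podman image inspect, removes any stale tag with crictl rmi, and then streams per-image tarballs from the local cache into the runtime. The manual equivalent for a single image, using the paths from this run (a sketch):

	# load one cached image only if the runtime does not already have it
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.27.3 \
	  || sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	sudo crictl images | grep kube-proxy        # confirm CRI-O now sees the image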
	I0626 20:46:34.379704   47605 main.go:141] libmachine: (embed-certs-299839) Waiting to get IP...
	I0626 20:46:34.380770   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.381274   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.381362   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.381264   48187 retry.go:31] will retry after 291.849421ms: waiting for machine to come up
	I0626 20:46:34.674760   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.675247   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.675276   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.675192   48187 retry.go:31] will retry after 276.057593ms: waiting for machine to come up
	I0626 20:46:34.952573   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.953045   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.953077   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.953003   48187 retry.go:31] will retry after 360.478931ms: waiting for machine to come up
	I0626 20:46:35.315537   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.316036   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.316057   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.315988   48187 retry.go:31] will retry after 582.62072ms: waiting for machine to come up
	I0626 20:46:35.899816   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.900171   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.900232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.900154   48187 retry.go:31] will retry after 502.843212ms: waiting for machine to come up
	I0626 20:46:36.404792   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:36.405188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:36.405222   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:36.405134   48187 retry.go:31] will retry after 594.811848ms: waiting for machine to come up
	I0626 20:46:37.001827   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:37.002238   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:37.002264   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:37.002182   48187 retry.go:31] will retry after 1.067889284s: waiting for machine to come up
	I0626 20:46:38.071685   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:38.072135   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:38.072158   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:38.072094   48187 retry.go:31] will retry after 1.189834776s: waiting for machine to come up
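While the no-preload profile loads images, the interleaved 47605 lines show the embed-certs VM still waiting for a DHCP lease; retry.go polls with growing, jittered delays rather than a fixed interval. A rough shell analogue of that bounded wait (a sketch; minikube does this in Go, and virsh domifaddr is just one way to ask libvirt for the lease):

	for delay in 0.3 0.3 0.4 0.6 0.5 0.6 1.1 1.2; do
	  ip=$(sudo virsh domifaddr embed-certs-299839 | awk '/ipv4/ {print $4}')
	  [ -n "$ip" ] && break
	  sleep "$delay"
	done
	echo "lease: ${ip:-none yet}"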
	I0626 20:46:36.844137   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (2.041169028s)
	I0626 20:46:36.844171   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0626 20:46:36.844205   47309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.27.3: (2.041210189s)
	I0626 20:46:36.844232   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0626 20:46:36.844245   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844257   47309 ssh_runner.go:235] Completed: which crictl: (1.868146562s)
	I0626 20:46:36.844293   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844300   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:36.844234   47309 ssh_runner.go:235] Completed: which crictl: (1.968483663s)
	I0626 20:46:36.844349   47309 ssh_runner.go:235] Completed: which crictl: (1.868154335s)
	I0626 20:46:36.844364   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:36.844380   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:36.844405   47309 ssh_runner.go:235] Completed: which crictl: (1.868235538s)
	I0626 20:46:36.844428   47309 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.411115015s)
	I0626 20:46:36.844448   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:36.844455   47309 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0626 20:46:36.844488   47309 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:36.844513   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:39.895683   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (3.051359255s)
	I0626 20:46:39.895720   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0626 20:46:39.895808   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0: (3.051484848s)
	I0626 20:46:39.895824   47309 ssh_runner.go:235] Completed: which crictl: (3.051289954s)
	I0626 20:46:39.895855   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0626 20:46:39.895873   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1: (3.051494383s)
	I0626 20:46:39.895888   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:39.895908   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0626 20:46:39.895950   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:39.895909   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3: (3.051516174s)
	I0626 20:46:39.895990   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:39.896000   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3: (3.051535924s)
	I0626 20:46:39.896033   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0626 20:46:39.896034   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0626 20:46:39.896089   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.896102   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901778   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0626 20:46:39.901797   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901830   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.911439   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0626 20:46:39.911477   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0626 20:46:39.911517   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0626 20:46:39.943818   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0626 20:46:39.943947   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:41.278134   47309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.334156546s)
	I0626 20:46:41.278173   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0626 20:46:41.278135   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (1.376281957s)
	I0626 20:46:41.278187   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0626 20:46:41.278207   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:41.278256   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.263991   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:39.264402   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:39.264433   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:39.264371   48187 retry.go:31] will retry after 1.805262511s: waiting for machine to come up
	I0626 20:46:41.071232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:41.071707   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:41.071731   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:41.071662   48187 retry.go:31] will retry after 1.945519102s: waiting for machine to come up
	I0626 20:46:43.018581   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:43.019039   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:43.019075   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:43.018983   48187 retry.go:31] will retry after 2.83662877s: waiting for machine to come up
	I0626 20:46:43.745408   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.467115523s)
	I0626 20:46:43.745443   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0626 20:46:43.745479   47309 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:43.745551   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:45.011214   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.26563338s)
	I0626 20:46:45.011266   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0626 20:46:45.011296   47309 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:45.011349   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:45.858520   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:45.858992   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:45.859026   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:45.858941   48187 retry.go:31] will retry after 2.332305212s: waiting for machine to come up
	I0626 20:46:48.193085   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:48.193594   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:48.193625   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:48.193543   48187 retry.go:31] will retry after 2.846333425s: waiting for machine to come up
	I0626 20:46:52.634333   47779 start.go:369] acquired machines lock for "default-k8s-diff-port-473235" in 2m17.310683576s
	I0626 20:46:52.634385   47779 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:52.634413   47779 fix.go:54] fixHost starting: 
	I0626 20:46:52.634850   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:52.634890   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:52.654153   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0626 20:46:52.654638   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:52.655306   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:46:52.655337   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:52.655747   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:52.655952   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:46:52.656158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:46:52.657823   47779 fix.go:102] recreateIfNeeded on default-k8s-diff-port-473235: state=Stopped err=<nil>
	I0626 20:46:52.657850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	W0626 20:46:52.657997   47779 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:52.659722   47779 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-473235" ...
	I0626 20:46:51.043526   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044005   47605 main.go:141] libmachine: (embed-certs-299839) Found IP for machine: 192.168.39.51
	I0626 20:46:51.044034   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has current primary IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044045   47605 main.go:141] libmachine: (embed-certs-299839) Reserving static IP address...
	I0626 20:46:51.044351   47605 main.go:141] libmachine: (embed-certs-299839) Reserved static IP address: 192.168.39.51
	I0626 20:46:51.044368   47605 main.go:141] libmachine: (embed-certs-299839) Waiting for SSH to be available...
	I0626 20:46:51.044405   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.044439   47605 main.go:141] libmachine: (embed-certs-299839) DBG | skip adding static IP to network mk-embed-certs-299839 - found existing host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"}
	I0626 20:46:51.044456   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Getting to WaitForSSH function...
	I0626 20:46:51.046694   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047088   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.047121   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047312   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH client type: external
	I0626 20:46:51.047348   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa (-rw-------)
	I0626 20:46:51.047392   47605 main.go:141] libmachine: (embed-certs-299839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:51.047414   47605 main.go:141] libmachine: (embed-certs-299839) DBG | About to run SSH command:
	I0626 20:46:51.047432   47605 main.go:141] libmachine: (embed-certs-299839) DBG | exit 0
	I0626 20:46:51.137058   47605 main.go:141] libmachine: (embed-certs-299839) DBG | SSH cmd err, output: <nil>: 
	I0626 20:46:51.137408   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetConfigRaw
	I0626 20:46:51.197444   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.199920   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200306   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.200339   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200574   47605 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/config.json ...
	I0626 20:46:51.267260   47605 machine.go:88] provisioning docker machine ...
	I0626 20:46:51.267304   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:51.267709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.267921   47605 buildroot.go:166] provisioning hostname "embed-certs-299839"
	I0626 20:46:51.267943   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.268086   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.270429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270762   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.270790   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270886   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.271060   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271200   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271308   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.271475   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.271933   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.271950   47605 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-299839 && echo "embed-certs-299839" | sudo tee /etc/hostname
	I0626 20:46:51.403584   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-299839
	
	I0626 20:46:51.403622   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.406552   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.406876   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.406904   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.407053   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.407335   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407530   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407716   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.407883   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.408280   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.408300   47605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-299839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-299839/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-299839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:51.534666   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
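For reference, the provisioning step above issues two SSH commands: one sets the guest hostname, the other makes /etc/hosts resolve it via 127.0.1.1. A minimal Go sketch of that sequence follows; the commandRunner interface and printRunner stub are hypothetical stand-ins for minikube's real SSH runner, and the stub prints instead of executing.

package main

import "fmt"

// commandRunner abstracts "run this shell command on the guest".
type commandRunner interface {
	Run(cmd string) error
}

// printRunner is a stand-in that echoes commands instead of dialing SSH.
type printRunner struct{}

func (printRunner) Run(cmd string) error {
	fmt.Println("would run:", cmd)
	return nil
}

// setHostname mirrors the two commands above: set the kernel hostname,
// then ensure /etc/hosts carries a 127.0.1.1 entry for it.
func setHostname(r commandRunner, name string) error {
	if err := r.Run(fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)); err != nil {
		return err
	}
	hostsFix := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return r.Run(hostsFix)
}

func main() {
	_ = setHostname(printRunner{}, "embed-certs-299839")
}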
	I0626 20:46:51.534702   47605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:51.534745   47605 buildroot.go:174] setting up certificates
	I0626 20:46:51.534753   47605 provision.go:83] configureAuth start
	I0626 20:46:51.534766   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.535047   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.537753   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538113   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.538141   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.540471   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.540890   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.540922   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.541015   47605 provision.go:138] copyHostCerts
	I0626 20:46:51.541089   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:51.541099   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:51.541155   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:51.541237   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:51.541246   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:51.541277   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:51.541333   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:51.541339   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:51.541357   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:51.541434   47605 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.embed-certs-299839 san=[192.168.39.51 192.168.39.51 localhost 127.0.0.1 minikube embed-certs-299839]
	I0626 20:46:51.873317   47605 provision.go:172] copyRemoteCerts
	I0626 20:46:51.873396   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:51.873427   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.876293   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876659   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.876696   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876889   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.877100   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.877262   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.877430   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:51.970189   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:46:51.993067   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:52.015607   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0626 20:46:52.037359   47605 provision.go:86] duration metric: configureAuth took 502.581033ms
	I0626 20:46:52.037401   47605 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:52.037623   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:52.037714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.040949   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.041486   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041642   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.041859   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042061   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042235   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.042398   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.042916   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.042936   47605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:52.366045   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:52.366072   47605 machine.go:91] provisioned docker machine in 1.098783864s
	I0626 20:46:52.366083   47605 start.go:300] post-start starting for "embed-certs-299839" (driver="kvm2")
	I0626 20:46:52.366112   47605 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:52.366134   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.366443   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:52.366472   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.369138   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369570   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.369630   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369781   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.369957   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.370131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.370278   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.467055   47605 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:52.471203   47605 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:52.471222   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:52.471288   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:52.471394   47605 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:52.471510   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:52.484668   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:52.510268   47605 start.go:303] post-start completed in 144.162745ms
	I0626 20:46:52.510292   47605 fix.go:56] fixHost completed within 19.415851972s
	I0626 20:46:52.510315   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.513188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513629   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.513662   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513848   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.514062   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514228   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514415   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.514569   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.514968   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.514983   47605 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 20:46:52.634177   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812412.582368193
	
	I0626 20:46:52.634199   47605 fix.go:206] guest clock: 1687812412.582368193
	I0626 20:46:52.634209   47605 fix.go:219] Guest: 2023-06-26 20:46:52.582368193 +0000 UTC Remote: 2023-06-26 20:46:52.510296584 +0000 UTC m=+163.430129249 (delta=72.071609ms)
	I0626 20:46:52.634237   47605 fix.go:190] guest clock delta is within tolerance: 72.071609ms
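For reference, the fix.go lines above read the guest clock with `date +%s.%N` and compare it against the host clock. A minimal Go sketch of that parse-and-compare, assuming the fractional part is the full nine-digit nanosecond field %N prints; the tolerance value is a demo assumption, since the report only shows the check passing.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output such as
// "1687812412.582368193" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1687812412.582368193")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	tolerance := 2 * time.Second // demo value
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}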
	I0626 20:46:52.634242   47605 start.go:83] releasing machines lock for "embed-certs-299839", held for 19.539848437s
	I0626 20:46:52.634277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.634623   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:52.637732   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638182   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.638220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638476   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639040   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639223   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639307   47605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:52.639346   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.639490   47605 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:52.639517   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.642288   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642923   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642968   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643016   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643351   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643492   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643528   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643564   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643763   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.643778   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643973   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643991   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.644109   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.644240   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.761230   47605 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:52.766865   47605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:52.919883   47605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:52.927218   47605 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:52.927290   47605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:52.948916   47605 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:52.948983   47605 start.go:466] detecting cgroup driver to use...
	I0626 20:46:52.949043   47605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:52.968673   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:52.982360   47605 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:52.982416   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:52.996984   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:53.015021   47605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:53.116692   47605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:53.251017   47605 docker.go:212] disabling docker service ...
	I0626 20:46:53.251096   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:53.268097   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:53.282223   47605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:53.412477   47605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:53.528110   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:46:53.541392   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:53.558736   47605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:53.558809   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.568482   47605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:53.568553   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.578178   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.587728   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.597231   47605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:53.606954   47605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:53.615250   47605 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:53.615308   47605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:53.628161   47605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:46:53.636477   47605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:53.755919   47605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:46:53.928744   47605 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:53.928823   47605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:53.934088   47605 start.go:534] Will wait 60s for crictl version
	I0626 20:46:53.934152   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:46:53.939345   47605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:53.971679   47605 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:53.971781   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.013494   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.062724   47605 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:54.064536   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:54.067854   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:54.068254   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068535   47605 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:54.072971   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
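For reference, the one-liner above drops any stale `host.minikube.internal` mapping from /etc/hosts and appends a fresh one, writing through a temp file. A minimal Go sketch of the same filter-append-replace pattern, run here against a scratch file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites a hosts file so exactly one line maps host to ip:
// filter out stale tab-separated mappings, append a fresh one, then
// write a temp file and rename it into place, mirroring the
// /tmp/h.$$ + cp dance above.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// demo against a scratch file rather than the real /etc/hosts
	demo := "/tmp/hosts.demo"
	_ = os.WriteFile(demo, []byte("127.0.0.1\tlocalhost\n"), 0o644)
	fmt.Println(upsertHost(demo, "192.168.39.1", "host.minikube.internal"))
}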
	I0626 20:46:54.085981   47605 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:54.086048   47605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:52.661170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Start
	I0626 20:46:52.661331   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring networks are active...
	I0626 20:46:52.662042   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network default is active
	I0626 20:46:52.662444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network mk-default-k8s-diff-port-473235 is active
	I0626 20:46:52.663218   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Getting domain xml...
	I0626 20:46:52.663876   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Creating domain...
	I0626 20:46:53.987148   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting to get IP...
	I0626 20:46:53.988282   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988739   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988832   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:53.988735   48355 retry.go:31] will retry after 271.192351ms: waiting for machine to come up
	I0626 20:46:54.261343   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261825   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261857   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.261773   48355 retry.go:31] will retry after 362.262293ms: waiting for machine to come up
	I0626 20:46:54.625453   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625951   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625978   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.625859   48355 retry.go:31] will retry after 311.337455ms: waiting for machine to come up
	I0626 20:46:54.938519   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939023   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939053   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.938972   48355 retry.go:31] will retry after 446.154442ms: waiting for machine to come up
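For reference, the retry.go lines above poll the libvirt DHCP leases with a growing, jittered delay until the domain has an IP. A minimal Go sketch of that retry pattern; the lookup callback and the returned address are hypothetical stand-ins for the real lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// waitForIP polls lookup with a jittered, growing delay until it
// returns an address, like the retry loop in the log above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	backoff := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2 // grow the base delay each round
	}
	return "", errNoIP
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		if calls++; calls < 3 {
			return "", errNoIP
		}
		return "192.168.39.51", nil // placeholder lease for the demo
	}, 10)
	fmt.Println(ip, err)
}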
	I0626 20:46:52.039929   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.0285527s)
	I0626 20:46:52.039951   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0626 20:46:52.039974   47309 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.040015   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.786422   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0626 20:46:52.786468   47309 cache_images.go:123] Successfully loaded all cached images
	I0626 20:46:52.786474   47309 cache_images.go:92] LoadImages completed in 18.320847233s
	I0626 20:46:52.786562   47309 ssh_runner.go:195] Run: crio config
	I0626 20:46:52.857805   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:46:52.857833   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:52.857849   47309 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:52.857871   47309 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.38 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-934450 NodeName:no-preload-934450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:52.858035   47309 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-934450"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:46:52.858115   47309 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-934450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
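For reference, the kubeadm config above is rendered from the options struct logged at kubeadm.go:176. A minimal Go sketch of that render step using text/template, trimmed to a few InitConfiguration fields purely for illustration (this is not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// initCfg is a trimmed stand-in for the config rendered above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.50.38",
		APIServerPort:    8443,
		NodeName:         "no-preload-934450",
		NodeIP:           "192.168.50.38",
	})
}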
	I0626 20:46:52.858172   47309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:52.867179   47309 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:52.867253   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:52.875412   47309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0626 20:46:52.891376   47309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:52.906859   47309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0626 20:46:52.924927   47309 ssh_runner.go:195] Run: grep 192.168.50.38	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:52.929059   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:52.942789   47309 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450 for IP: 192.168.50.38
	I0626 20:46:52.942825   47309 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:52.943011   47309 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:52.943059   47309 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:52.943138   47309 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.key
	I0626 20:46:52.943195   47309 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key.01da567d
	I0626 20:46:52.943236   47309 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key
	I0626 20:46:52.943341   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:52.943376   47309 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:52.943396   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:52.943435   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:52.943472   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:52.943509   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:52.943551   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:52.944147   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:52.971630   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:52.997892   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:53.024951   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 20:46:53.048462   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:53.075077   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:53.100318   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:53.129545   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:53.162187   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:53.191304   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:53.216166   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:53.240182   47309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:53.256447   47309 ssh_runner.go:195] Run: openssl version
	I0626 20:46:53.262053   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:53.272163   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277028   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277084   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.282611   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:53.296039   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:53.306923   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312778   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312825   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.320244   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:53.334066   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:53.347662   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353665   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353725   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.361150   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:46:53.374846   47309 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:53.380462   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:53.387949   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:53.393690   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:53.399208   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:53.405073   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:53.411265   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
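For reference, each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same check can be done in-process with crypto/x509; a minimal sketch, with the path shown purely as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, matching the semantics of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// example path; on a minikube guest these live under /var/lib/minikube/certs
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}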
	I0626 20:46:53.417798   47309 kubeadm.go:404] StartCluster: {Name:no-preload-934450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:53.417916   47309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:53.417950   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:53.451231   47309 cri.go:89] found id: ""
	I0626 20:46:53.451307   47309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:53.460716   47309 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:53.460737   47309 kubeadm.go:636] restartCluster start
	I0626 20:46:53.460790   47309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:53.470518   47309 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.471961   47309 kubeconfig.go:92] found "no-preload-934450" server: "https://192.168.50.38:8443"
	I0626 20:46:53.475433   47309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:53.484054   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.484108   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:53.497348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.998070   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.998129   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.010119   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.498134   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.498223   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.512223   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.997432   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.997520   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.015317   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.497435   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.497516   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.512591   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.998180   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.998251   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.013135   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:56.497483   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.497573   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.512714   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
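For reference, the loop above re-runs `pgrep` roughly every 500ms until kube-apiserver has a pid. A minimal Go sketch of that poll-until-deadline pattern, with the deadline shortened for the demo:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPID shells out to pgrep the same way the loop above does;
// it fails with exit status 1 until kube-apiserver is running.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	deadline := time.Now().Add(10 * time.Second) // shortened for the demo
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Println("apiserver pid:", pid)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	fmt.Println("stopped: unable to get apiserver pid")
}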
	I0626 20:46:54.116295   47605 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:54.116360   47605 ssh_runner.go:195] Run: which lz4
	I0626 20:46:54.120344   47605 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0626 20:46:54.124462   47605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:46:54.124490   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:46:55.959041   47605 crio.go:444] Took 1.838722 seconds to copy over tarball
	I0626 20:46:55.959115   47605 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:46:59.019532   47605 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060382374s)
	I0626 20:46:59.019555   47605 crio.go:451] Took 3.060486 seconds to extract the tarball
	I0626 20:46:59.019562   47605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:46:59.058687   47605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:59.102812   47605 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:46:59.102833   47605 cache_images.go:84] Images are preloaded, skipping loading
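For reference, the preload path above is: stat the tarball, scp it over if missing, extract it with `tar -I lz4`, then delete it. A minimal Go sketch of the extract-and-clean-up step, assuming tar and lz4 are on PATH (the paths are the ones shown in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the step above: verify the tarball exists,
// unpack it under dir with lz4 decompression, then remove it.
func extractPreload(tarball, dir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("existence check for %s: %w", tarball, err)
	}
	cmd := exec.Command("tar", "-I", "lz4", "-C", dir, "-xf", tarball)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract: %w", err)
	}
	return os.Remove(tarball)
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
}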
	I0626 20:46:59.102896   47605 ssh_runner.go:195] Run: crio config
	I0626 20:46:55.386479   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.386986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.387014   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:55.386901   48355 retry.go:31] will retry after 710.798834ms: waiting for machine to come up
	I0626 20:46:56.099580   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100079   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100112   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:56.100023   48355 retry.go:31] will retry after 921.187154ms: waiting for machine to come up
	I0626 20:46:57.022481   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022914   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.022859   48355 retry.go:31] will retry after 914.232442ms: waiting for machine to come up
	I0626 20:46:57.938375   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938823   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938845   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.938807   48355 retry.go:31] will retry after 1.411011331s: waiting for machine to come up
	I0626 20:46:59.351697   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352133   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352169   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:59.352076   48355 retry.go:31] will retry after 1.830031795s: waiting for machine to come up
	I0626 20:46:56.997450   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.997518   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.009310   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.497847   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.497929   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.513061   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.997474   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.997553   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.012610   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.498200   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.498274   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.513410   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.997938   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.998022   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.013357   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.497503   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.497581   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.514354   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.997445   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.997531   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.008894   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.497471   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.497555   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.508635   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.998326   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.998429   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.009836   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.498479   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.498593   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.510348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
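
The half-second cadence of the "Checking apiserver status" lines above is a simple poll: minikube shells into the VM and runs pgrep until a kube-apiserver process appears or a deadline lapses. A minimal local sketch of that pattern (helper name and timeout are invented here, and the SSH hop is omitted):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls for a kube-apiserver process until the deadline,
    // mirroring the ~500ms "Checking apiserver status" cadence in the log.
    // Hypothetical helper, not minikube's actual code.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when a matching process exists, 1 otherwise.
    		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for kube-apiserver", timeout)
    }

    func main() {
    	if err := waitForAPIServer(30 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
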
	I0626 20:46:59.159206   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:46:59.159236   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:59.159252   47605 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:59.159286   47605 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-299839 NodeName:embed-certs-299839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:59.159423   47605 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-299839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
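
The kubeadm config above appears to be rendered from a Go text/template over the options struct printed at kubeadm.go:176. A toy reduction of that render step (struct fields and template trimmed to three values from the embed-certs-299839 run; not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    // opts holds a tiny subset of the kubeadm options printed at kubeadm.go:176.
    type opts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    }

    // A trimmed-down stand-in for the real template.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Values taken from the embed-certs-299839 run above.
    	_ = t.Execute(os.Stdout, opts{"192.168.39.51", 8443, "embed-certs-299839"})
    }
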
	I0626 20:46:59.159484   47605 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-299839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 20:46:59.159540   47605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:59.168802   47605 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:59.168882   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:59.177994   47605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0626 20:46:59.196041   47605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:59.214092   47605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0626 20:46:59.235187   47605 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:59.239440   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:59.251723   47605 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839 for IP: 192.168.39.51
	I0626 20:46:59.251772   47605 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:59.251943   47605 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:59.252017   47605 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:59.252134   47605 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/client.key
	I0626 20:46:59.252381   47605 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key.be9c3c95
	I0626 20:46:59.252482   47605 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key
	I0626 20:46:59.252626   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:59.252667   47605 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:59.252682   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:59.252718   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:59.252748   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:59.252805   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:59.252868   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:59.253555   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:59.280222   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:59.306244   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:59.331876   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:46:59.358710   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:59.385239   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:59.408963   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:59.433684   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:59.457235   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:59.480565   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:59.507918   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:59.532762   47605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:59.551283   47605 ssh_runner.go:195] Run: openssl version
	I0626 20:46:59.557079   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:59.568335   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573129   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573187   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.579116   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:46:59.589952   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:59.600935   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605668   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605735   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.611234   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:59.622615   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:59.633737   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638884   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638962   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.644559   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:59.655653   47605 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:59.660632   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:59.666672   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:59.672628   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:59.679194   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:59.685197   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:59.691190   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
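
Two OpenSSL conventions are at work in the certificate steps above: `openssl x509 -hash` prints the subject-name hash that OpenSSL uses to look CA files up as /etc/ssl/certs/<hash>.0 (hence the ln -fs to 51391683.0 and friends), and `-checkend 86400` exits non-zero if the certificate expires within a day. A small Go wrapper illustrating both (the path in main is a placeholder):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHash wraps `openssl x509 -hash -noout`, which prints the hash
    // OpenSSL uses to find CA files as /etc/ssl/certs/<hash>.0.
    func subjectHash(path string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // expiresWithin wraps `openssl x509 -checkend`, which exits non-zero if
    // the certificate expires within the given number of seconds.
    func expiresWithin(path string, seconds int) bool {
    	args := []string{"x509", "-noout", "-in", path, "-checkend", fmt.Sprint(seconds)}
    	return exec.Command("openssl", args...).Run() != nil
    }

    func main() {
    	path := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder
    	if h, err := subjectHash(path); err == nil {
    		fmt.Printf("symlink target: /etc/ssl/certs/%s.0\n", h)
    	}
    	fmt.Println("expires within 24h:", expiresWithin(path, 86400))
    }
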
	I0626 20:46:59.697063   47605 kubeadm.go:404] StartCluster: {Name:embed-certs-299839 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:59.697146   47605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:59.697191   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:59.731197   47605 cri.go:89] found id: ""
	I0626 20:46:59.731256   47605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:59.741949   47605 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:59.741968   47605 kubeadm.go:636] restartCluster start
	I0626 20:46:59.742023   47605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:59.751837   47605 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.753347   47605 kubeconfig.go:92] found "embed-certs-299839" server: "https://192.168.39.51:8443"
	I0626 20:46:59.756955   47605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:59.766951   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.767023   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.779343   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.280064   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.280149   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.293730   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.780264   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.780347   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.793352   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.279827   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.279911   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.292843   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.779409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.779513   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.793293   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.279814   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.279902   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.296345   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.779892   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.779980   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.796346   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.280342   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.280417   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.292883   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.780156   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.780232   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.792667   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.184295   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184668   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184694   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:01.184605   48355 retry.go:31] will retry after 2.248796967s: waiting for machine to come up
	I0626 20:47:03.435559   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436054   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436086   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:03.435982   48355 retry.go:31] will retry after 2.012102985s: waiting for machine to come up
	I0626 20:47:01.998275   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.998353   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.014217   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.497731   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.497824   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.509505   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.998119   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.998202   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.009348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.485111   47309 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:03.485154   47309 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:03.485167   47309 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:03.485216   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:03.516791   47309 cri.go:89] found id: ""
	I0626 20:47:03.516868   47309 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:03.531523   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:03.540694   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:03.540761   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549498   47309 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549525   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:03.687202   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.779117   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.091878038s)
	I0626 20:47:04.779156   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.983470   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:05.059963   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:05.136199   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:05.136282   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:05.663265   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:06.163057   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
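
Rather than running a full `kubeadm init`, the restart path above replays the individual init phases — certs, kubeconfig, kubelet-start, control-plane, etcd — against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence (the sudo/PATH wrapper from the log is omitted for brevity):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // reconfigure replays the `kubeadm init phase` steps seen in the log
    // instead of a full `kubeadm init`. Sketch only; the real invocations
    // run under sudo with PATH pointed at /var/lib/minikube/binaries.
    func reconfigure(config string) error {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append(phase, "--config", config)
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("kubeadm %v: %v\n%s", phase, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := reconfigure("/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }
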
	I0626 20:47:04.280330   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.280447   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.292565   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:04.780127   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.780225   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.797554   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.279900   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.279986   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.297853   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.779501   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.779594   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.794314   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.279916   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.280001   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.296829   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.779473   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.779566   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.793302   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.279802   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.279888   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.292407   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.779813   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.779914   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.793591   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.279846   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.279935   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.292196   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.779753   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.779859   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.792362   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.450681   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451186   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451216   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:05.451117   48355 retry.go:31] will retry after 3.442192384s: waiting for machine to come up
	I0626 20:47:08.895024   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:08.895520   48355 retry.go:31] will retry after 4.272351839s: waiting for machine to come up
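
The "will retry after 914ms ... 4.27s" lines above show retry.go's growing, jittered backoff while waiting for the VM to obtain a DHCP lease (the delays grow overall but not strictly, because of the jitter). A minimal sketch of the pattern (base delay and growth factor are invented; only the shape comes from the log):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries op with a growing, jittered delay, echoing
    // the retry.go:31 lines above. Hypothetical helper, not minikube's code.
    func retryWithBackoff(attempts int, op func() error) error {
    	backoff := 500 * time.Millisecond
    	for i := 1; i <= attempts; i++ {
    		if err := op(); err == nil {
    			return nil
    		}
    		// Jitter keeps concurrent waiters from retrying in lockstep.
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("attempt %d failed, will retry after %s\n", i, sleep)
    		time.Sleep(sleep)
    		backoff = backoff * 3 / 2
    	}
    	return fmt.Errorf("gave up after %d attempts", attempts)
    }

    func main() {
    	_ = retryWithBackoff(5, func() error { return fmt.Errorf("no IP yet") })
    }
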
	I0626 20:47:06.662926   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.163275   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.662871   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.689321   47309 api_server.go:72] duration metric: took 2.55312002s to wait for apiserver process to appear ...
	I0626 20:47:07.689348   47309 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:07.689366   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:10.879412   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:10.879439   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:11.379823   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.386705   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.386736   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:11.880574   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.892733   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.892768   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:12.380392   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:12.389894   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:47:12.400274   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:12.400307   47309 api_server.go:131] duration metric: took 4.710951407s to wait for apiserver health ...
	I0626 20:47:12.400320   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:47:12.400332   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:12.402355   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
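
The healthz exchange above is effectively a status-code state machine: 403 while anonymous access is still forbidden during bootstrap, 500 while post-start hooks such as rbac/bootstrap-roles are still failing, then 200 once the control plane settles. A sketch of a tolerant poller (the skipped TLS verification stands in for the client certificates a real checker would present):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz polls /healthz until it returns 200, tolerating the 403
    // and 500 responses seen in the log above. Sketch only.
    func pollHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	if err := pollHealthz("https://192.168.50.38:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
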
	I0626 20:47:09.280409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:09.280512   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:09.293009   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:09.767593   47605 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:09.767636   47605 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:09.767648   47605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:09.767705   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:09.800380   47605 cri.go:89] found id: ""
	I0626 20:47:09.800465   47605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:09.819239   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:09.830482   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:09.830547   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840424   47605 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840451   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:09.979898   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.746785   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.960847   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:11.041569   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:11.122238   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:11.122322   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:11.640034   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.140386   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.640370   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.139901   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.639546   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.663848   47605 api_server.go:72] duration metric: took 2.54160148s to wait for apiserver process to appear ...
	I0626 20:47:13.663874   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:13.663905   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:14.587552   46683 start.go:369] acquired machines lock for "old-k8s-version-490377" in 55.268521785s
	I0626 20:47:14.587610   46683 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:47:14.587622   46683 fix.go:54] fixHost starting: 
	I0626 20:47:14.588035   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:47:14.588074   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:47:14.607186   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0626 20:47:14.607765   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:47:14.608361   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:47:14.608384   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:47:14.608697   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:47:14.608908   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:14.609056   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:47:14.610765   46683 fix.go:102] recreateIfNeeded on old-k8s-version-490377: state=Stopped err=<nil>
	I0626 20:47:14.610791   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	W0626 20:47:14.611905   46683 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:47:14.613885   46683 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-490377" ...
	I0626 20:47:13.169996   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.170568   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Found IP for machine: 192.168.61.238
	I0626 20:47:13.170601   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserving static IP address...
	I0626 20:47:13.170622   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has current primary IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.171048   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.171080   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserved static IP address: 192.168.61.238
	I0626 20:47:13.171107   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | skip adding static IP to network mk-default-k8s-diff-port-473235 - found existing host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"}
	I0626 20:47:13.171128   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Getting to WaitForSSH function...
	I0626 20:47:13.171141   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for SSH to be available...
	I0626 20:47:13.173755   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174235   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.174265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174442   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH client type: external
	I0626 20:47:13.174485   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa (-rw-------)
	I0626 20:47:13.174518   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:13.174538   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | About to run SSH command:
	I0626 20:47:13.174553   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | exit 0
	I0626 20:47:13.265799   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | SSH cmd err, output: <nil>: 
	I0626 20:47:13.266189   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetConfigRaw
	I0626 20:47:13.266850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.269749   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270212   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.270253   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270498   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:47:13.270732   47779 machine.go:88] provisioning docker machine ...
	I0626 20:47:13.270758   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:13.270959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271112   47779 buildroot.go:166] provisioning hostname "default-k8s-diff-port-473235"
	I0626 20:47:13.271134   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.273679   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274087   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.274135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274273   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.274446   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274618   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274747   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.274940   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.275353   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.275369   47779 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-473235 && echo "default-k8s-diff-port-473235" | sudo tee /etc/hostname
	I0626 20:47:13.416565   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-473235
	
	I0626 20:47:13.416595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.420132   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420596   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.420670   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.421172   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421392   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.421821   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.422425   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.422457   47779 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-473235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-473235/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-473235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:13.566095   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:47:13.566131   47779 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:13.566175   47779 buildroot.go:174] setting up certificates
	I0626 20:47:13.566192   47779 provision.go:83] configureAuth start
	I0626 20:47:13.566206   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.566509   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.569795   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570251   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.570283   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570476   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.573020   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573439   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.573475   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573704   47779 provision.go:138] copyHostCerts
	I0626 20:47:13.573782   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:13.573795   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:13.573859   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:13.573976   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:13.573987   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:13.574016   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:13.574094   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:13.574108   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:13.574134   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:13.574199   47779 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-473235 san=[192.168.61.238 192.168.61.238 localhost 127.0.0.1 minikube default-k8s-diff-port-473235]
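	[editor's note] The san=[...] list in the provision line above feeds a CA-signed server certificate. A compact, illustration-only Go sketch of issuing such a cert; a throwaway CA stands in for ca.pem/ca-key.pem, and error handling is trimmed for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Illustration: a fresh self-signed CA stands in for ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-473235"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "default-k8s-diff-port-473235"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			// SANs taken from the provision log line above.
			DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-473235"},
			IPAddresses: []net.IP{net.ParseIP("192.168.61.238"), net.ParseIP("127.0.0.1")},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Printf("server.pem is %d DER bytes\n", len(der))
	}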
	I0626 20:47:13.795155   47779 provision.go:172] copyRemoteCerts
	I0626 20:47:13.795207   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:13.795230   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.798039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798457   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.798512   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798706   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.798918   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.799130   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.799274   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:13.892185   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:13.921840   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0626 20:47:13.951311   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:13.980185   47779 provision.go:86] duration metric: configureAuth took 413.976937ms
	I0626 20:47:13.980216   47779 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:13.980460   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:47:13.980551   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.983814   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984217   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.984265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984604   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.984826   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985010   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985144   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.985344   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.985947   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.985979   47779 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:14.317679   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:14.317702   47779 machine.go:91] provisioned docker machine in 1.046953094s
	I0626 20:47:14.317713   47779 start.go:300] post-start starting for "default-k8s-diff-port-473235" (driver="kvm2")
	I0626 20:47:14.317723   47779 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:14.317744   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.318064   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:14.318101   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.321001   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321358   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.321408   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321598   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.321806   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.321986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.322139   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.414722   47779 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:14.419797   47779 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:14.419822   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:14.419895   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:14.419990   47779 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:14.420118   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:14.430766   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:14.458086   47779 start.go:303] post-start completed in 140.355388ms
	I0626 20:47:14.458107   47779 fix.go:56] fixHost completed within 21.823695632s
	I0626 20:47:14.458125   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.460953   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461277   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.461308   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461472   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.461651   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.461841   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.462025   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.462175   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:14.462805   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:14.462823   47779 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:47:14.587374   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812434.534091475
	
	I0626 20:47:14.587395   47779 fix.go:206] guest clock: 1687812434.534091475
	I0626 20:47:14.587403   47779 fix.go:219] Guest: 2023-06-26 20:47:14.534091475 +0000 UTC Remote: 2023-06-26 20:47:14.458110543 +0000 UTC m=+159.266861615 (delta=75.980932ms)
	I0626 20:47:14.587446   47779 fix.go:190] guest clock delta is within tolerance: 75.980932ms
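	[editor's note] The guest clock check compares `date +%s.%N` output from the VM against the host's clock. A small Go sketch of that comparison using the two timestamps from the log; the 2s tolerance is an assumed threshold, since the log does not state the actual bound:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output (epoch seconds.nanoseconds)
	// into a time.Time. Float parsing trims nanosecond precision, which is
	// fine for a millisecond-scale tolerance check.
	func parseGuestClock(s string) (time.Time, error) {
		f, err := strconv.ParseFloat(s, 64)
		if err != nil {
			return time.Time{}, err
		}
		sec := int64(f)
		return time.Unix(sec, int64((f-float64(sec))*1e9)).UTC(), nil
	}

	func main() {
		guest, _ := parseGuestClock("1687812434.534091475")
		// Host-side timestamp taken from the "Remote:" field in the log above.
		remote := time.Date(2023, 6, 26, 20, 47, 14, 458110543, time.UTC)
		delta := guest.Sub(remote)
		// Duration.Abs requires Go 1.19+; 2s is an assumed tolerance.
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() <= 2*time.Second)
	}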
	I0626 20:47:14.587456   47779 start.go:83] releasing machines lock for "default-k8s-diff-port-473235", held for 21.953095935s
	I0626 20:47:14.587492   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.587776   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:14.590654   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591111   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.591143   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591332   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.591869   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592074   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592151   47779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:14.592205   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.592451   47779 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:14.592489   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.595039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595271   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595585   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595615   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595659   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595698   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595901   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596076   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596118   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596311   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596344   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596466   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.596622   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.683637   47779 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:14.713738   47779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:14.869873   47779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:14.877719   47779 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:14.877815   47779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:14.893656   47779 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:47:14.893682   47779 start.go:466] detecting cgroup driver to use...
	I0626 20:47:14.893738   47779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:14.908419   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:14.921730   47779 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:14.921812   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:14.940659   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:14.955010   47779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:15.062849   47779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:15.193682   47779 docker.go:212] disabling docker service ...
	I0626 20:47:15.193810   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:15.210855   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:15.223362   47779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:15.348648   47779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:15.471398   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:47:15.496137   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:15.523967   47779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:47:15.524041   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.537188   47779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:15.537258   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.550404   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.563577   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
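	[editor's note] The sed invocations above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf in place. A rough Go equivalent of the first two edits, shown only to make the rewrite rules explicit; the sample input below is hypothetical:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Hypothetical starting content of the 02-crio.conf drop-in.
		conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
		// Mirror of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// Mirror of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}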
	I0626 20:47:15.574958   47779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:15.588685   47779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:15.600611   47779 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:15.600680   47779 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:15.615658   47779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:47:15.628004   47779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:15.763410   47779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:15.982719   47779 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:15.982799   47779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
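	[editor's note] The 60s socket wait above reduces to "poll a path until stat succeeds or a deadline passes". A minimal sketch of that loop; the 500ms poll interval is an assumption, as minikube's actual cadence is not shown in the log:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls for a filesystem path until it exists or the
	// timeout elapses, like the crio.sock wait above.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // assumed interval
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}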
	I0626 20:47:15.990799   47779 start.go:534] Will wait 60s for crictl version
	I0626 20:47:15.990864   47779 ssh_runner.go:195] Run: which crictl
	I0626 20:47:15.997709   47779 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:16.041802   47779 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:16.041893   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.094989   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.151324   47779 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:47:12.403841   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:12.420028   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:12.459593   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:12.486209   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:12.486256   47309 system_pods.go:61] "coredns-5d78c9869d-dwkng" [8919aa0b-b8b6-4672-aa75-ea5ea1d27ef6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:12.486270   47309 system_pods.go:61] "etcd-no-preload-934450" [67a1367b-dc99-4613-8a75-796a64f13f0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:12.486281   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [7452cf79-3e8f-4dce-922a-a52115c7059f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:12.486291   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [a3393645-4d3d-4fab-a32f-c15ff3bfcdca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:12.486300   47309 system_pods.go:61] "kube-proxy-phrv2" [d08fdd52-cc2a-43cb-84c4-170ad241527e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:12.486310   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [cc1c89f8-925a-4847-b693-08fbc4905119] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:12.486319   47309 system_pods.go:61] "metrics-server-74d5c6b9c-7szm5" [d94c68f7-4521-4366-b5db-38f420a78dd2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:12.486331   47309 system_pods.go:61] "storage-provisioner" [7aa74f96-c306-4d70-a211-715b4877b15b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:12.486341   47309 system_pods.go:74] duration metric: took 26.722879ms to wait for pod list to return data ...
	I0626 20:47:12.486359   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:12.490745   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:12.490784   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:12.490809   47309 node_conditions.go:105] duration metric: took 4.437855ms to run NodePressure ...
	I0626 20:47:12.490830   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:12.794912   47309 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800827   47309 kubeadm.go:787] kubelet initialised
	I0626 20:47:12.800855   47309 kubeadm.go:788] duration metric: took 5.915334ms waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800865   47309 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:12.807162   47309 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:14.822450   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:14.614985   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Start
	I0626 20:47:14.615159   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring networks are active...
	I0626 20:47:14.615866   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network default is active
	I0626 20:47:14.616331   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network mk-old-k8s-version-490377 is active
	I0626 20:47:14.616785   46683 main.go:141] libmachine: (old-k8s-version-490377) Getting domain xml...
	I0626 20:47:14.617507   46683 main.go:141] libmachine: (old-k8s-version-490377) Creating domain...
	I0626 20:47:16.055502   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting to get IP...
	I0626 20:47:16.056448   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.056913   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.057009   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.056935   48478 retry.go:31] will retry after 281.770624ms: waiting for machine to come up
	I0626 20:47:16.340685   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.341472   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.341496   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.341268   48478 retry.go:31] will retry after 249.185886ms: waiting for machine to come up
	I0626 20:47:16.591867   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.592547   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.592718   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.592671   48478 retry.go:31] will retry after 327.814159ms: waiting for machine to come up
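	[editor's note] The "will retry after ..." waits above grow roughly geometrically with jitter. One plausible way to generate such waits; the exact backoff and jitter model in minikube's retry.go is not visible in the log, so this is only an approximation:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryAfter returns a jittered, exponentially growing wait:
	// half the base plus a random amount up to the base.
	func retryAfter(attempt int) time.Duration {
		base := 250 * time.Millisecond << uint(attempt)
		jitter := time.Duration(rand.Int63n(int64(base)))
		return base/2 + jitter
	}

	func main() {
		for i := 0; i < 5; i++ {
			fmt.Printf("attempt %d: will retry after %v\n", i, retryAfter(i))
		}
	}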
	I0626 20:47:17.910025   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:17.910061   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:18.411167   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.425310   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.425345   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:18.910567   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.920897   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.920933   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:19.410736   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:19.418228   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0626 20:47:19.428516   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:19.428551   47605 api_server.go:131] duration metric: took 5.764669652s to wait for apiserver health ...
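	[editor's note] The loop above polls /healthz roughly every 500ms, treating the 403 and 500 responses as "not yet healthy" and stopping at the first 200. A minimal sketch of the same wait; TLS verification is disabled purely as an illustration shortcut, since the real client's TLS setup is not shown in the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns HTTP 200 or the timeout passes.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitHealthy("https://192.168.39.51:8443/healthz", time.Minute))
	}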
	I0626 20:47:19.428561   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:47:19.428573   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:19.430711   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:47:16.152563   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:16.156250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156617   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:16.156644   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156894   47779 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:16.162480   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:16.180283   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:47:16.180336   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:16.227399   47779 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:47:16.227474   47779 ssh_runner.go:195] Run: which lz4
	I0626 20:47:16.233720   47779 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:16.240423   47779 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:16.240463   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:47:18.263416   47779 crio.go:444] Took 2.029753 seconds to copy over tarball
	I0626 20:47:18.263515   47779 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:47:16.837607   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:19.361799   47309 pod_ready.go:92] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.361869   47309 pod_ready.go:81] duration metric: took 6.554677083s waiting for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.361886   47309 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370122   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.370145   47309 pod_ready.go:81] duration metric: took 8.249243ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370157   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391052   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:21.391082   47309 pod_ready.go:81] duration metric: took 2.020917194s waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391096   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
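	[editor's note] Each pod_ready wait above blocks until the pod reports condition Ready=True. A simplified stand-in for that check; the structs mirror only the status fields the test consults, not the full Kubernetes API types:

	package main

	import "fmt"

	type condition struct {
		Type   string
		Status string
	}

	type podStatus struct {
		Phase      string
		Conditions []condition
	}

	// isReady reports whether the pod has condition Ready=True, which is
	// what the `has status "Ready":"True"` log lines above are checking.
	func isReady(st podStatus) bool {
		for _, c := range st.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True"
			}
		}
		return false
	}

	func main() {
		st := podStatus{Phase: "Running", Conditions: []condition{{Type: "Ready", Status: "False"}}}
		fmt.Println(isReady(st)) // false, matching has status "Ready":"False"
	}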
	I0626 20:47:16.922381   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.922923   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.922952   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.922873   48478 retry.go:31] will retry after 486.21568ms: waiting for machine to come up
	I0626 20:47:17.410676   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:17.411282   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:17.411305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:17.411227   48478 retry.go:31] will retry after 606.277374ms: waiting for machine to come up
	I0626 20:47:18.020296   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.021367   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.021400   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.021287   48478 retry.go:31] will retry after 576.843487ms: waiting for machine to come up
	I0626 20:47:18.599674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.600326   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.600352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.600221   48478 retry.go:31] will retry after 857.329718ms: waiting for machine to come up
	I0626 20:47:19.459545   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:19.460101   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:19.460125   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:19.460050   48478 retry.go:31] will retry after 1.017747035s: waiting for machine to come up
	I0626 20:47:20.479538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:20.480140   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:20.480178   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:20.480043   48478 retry.go:31] will retry after 1.379789146s: waiting for machine to come up
	I0626 20:47:19.432325   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:19.461944   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:19.498519   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:19.512703   47605 system_pods.go:59] 9 kube-system pods found
	I0626 20:47:19.512831   47605 system_pods.go:61] "coredns-5d78c9869d-dz48f" [87a67e95-a071-4865-902b-0e401e852456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512860   47605 system_pods.go:61] "coredns-5d78c9869d-lbfsr" [adee7e6b-88b2-412e-bb2d-fc0939bca149] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512905   47605 system_pods.go:61] "etcd-embed-certs-299839" [8aefd012-6a54-4e75-afc9-cc8385212eb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:19.512937   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [e178b5e8-445c-444f-965e-051233c2fa44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:19.512971   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [e965e4af-a673-4b93-bb63-e7bfc0f9514d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:19.512995   47605 system_pods.go:61] "kube-proxy-q5khr" [6c11d667-3490-4417-8e0c-373fe25d06b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:19.513014   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [0385958c-3f22-4eb8-bdac-cbaeb52fe9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:19.513050   47605 system_pods.go:61] "metrics-server-74d5c6b9c-gb6b2" [b5a15d68-23ee-4274-a147-db6f2eef97e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:19.513074   47605 system_pods.go:61] "storage-provisioner" [42bd8483-f594-4bf9-8c32-9688d1d99523] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:19.513093   47605 system_pods.go:74] duration metric: took 14.550735ms to wait for pod list to return data ...
	I0626 20:47:19.513125   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:19.519356   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:19.519455   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:19.519513   47605 node_conditions.go:105] duration metric: took 6.36764ms to run NodePressure ...
	I0626 20:47:19.519573   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:19.935407   47605 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943592   47605 kubeadm.go:787] kubelet initialised
	I0626 20:47:19.943622   47605 kubeadm.go:788] duration metric: took 8.187833ms waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943633   47605 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:19.951319   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.957985   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958016   47605 pod_ready.go:81] duration metric: took 6.605612ms waiting for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.958027   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958037   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.965229   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965312   47605 pod_ready.go:81] duration metric: took 7.251456ms waiting for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.965335   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965391   47605 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:22.010596   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:21.752755   47779 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.48920102s)
	I0626 20:47:21.752790   47779 crio.go:451] Took 3.489344 seconds to extract the tarball
	I0626 20:47:21.752802   47779 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:47:21.800026   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:21.844486   47779 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:47:21.844504   47779 cache_images.go:84] Images are preloaded, skipping loading
	I0626 20:47:21.844573   47779 ssh_runner.go:195] Run: crio config
	I0626 20:47:21.924367   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:21.924397   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:21.924411   47779 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:21.924431   47779 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.238 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-473235 NodeName:default-k8s-diff-port-473235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:47:21.924593   47779 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-473235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
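	The rendered config above is a single YAML stream of four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal Go sketch, not minikube's own code, that walks such a stream and lists each document's kind; the kubeadm.yaml path and the gopkg.in/yaml.v3 dependency are assumptions of the sketch:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Open the generated config (path is an assumption for this sketch).
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// yaml.v3's Decoder iterates over "---"-separated documents.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break
			}
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}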
	I0626 20:47:21.924685   47779 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-473235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0626 20:47:21.924756   47779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:47:21.934851   47779 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:21.934951   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:21.944791   47779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0626 20:47:21.963087   47779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:21.981936   47779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0626 20:47:22.002207   47779 ssh_runner.go:195] Run: grep 192.168.61.238	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:22.006443   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
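	The bash one-liner above is an idempotent hosts-file update: drop any stale control-plane.minikube.internal line, append the current mapping, and copy the result back over /etc/hosts. The same logic as a self-contained Go sketch, with the sudo/temp-file dance and error handling simplified:

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the bash pipeline in the log: remove any
	// line already ending in "\t<host>", then append "<ip>\t<host>".
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// Values taken from the log; writing /etc/hosts requires root.
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.238", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}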
	I0626 20:47:22.019555   47779 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235 for IP: 192.168.61.238
	I0626 20:47:22.019591   47779 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:22.019794   47779 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:22.019859   47779 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:22.019983   47779 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.key
	I0626 20:47:22.020069   47779 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key.761b3e7f
	I0626 20:47:22.020126   47779 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key
	I0626 20:47:22.020257   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:22.020296   47779 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:22.020309   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:22.020340   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:22.020376   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:22.020418   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:22.020475   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:22.021354   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:22.045205   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:22.069269   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:22.092387   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:22.120395   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:22.143199   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:22.167864   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:22.192223   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:22.218085   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:22.243249   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:22.269200   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:22.294015   47779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:22.313139   47779 ssh_runner.go:195] Run: openssl version
	I0626 20:47:22.319998   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:22.330864   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337082   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337144   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.343158   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:22.354507   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:22.366438   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371070   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371127   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.376858   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:22.387928   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:22.398665   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403091   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403139   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.410314   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
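	The symlink names here (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention: /etc/ssl/certs/<hash>.0 points at the certificate whose subject hashes to <hash>. A sketch that derives the hash by shelling out to openssl exactly as the log does; the cert path is illustrative and creating the link needs root:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// subjectHash runs "openssl x509 -hash -noout -in <cert>", the same
	// command the log uses to derive the /etc/ssl/certs/<hash>.0 name.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		h, err := subjectHash(cert)
		if err != nil {
			panic(err)
		}
		// Replace any existing link, mirroring the "ln -fs" step above.
		link := "/etc/ssl/certs/" + h + ".0"
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}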
	I0626 20:47:22.421729   47779 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:22.426373   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:22.432450   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:22.438093   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:22.446065   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:22.452103   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:22.457940   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
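	Each "openssl x509 ... -checkend 86400" above asks one question: does the certificate expire within the next 24 hours (nonzero exit status if so)? A pure-Go equivalent using crypto/x509; illustrative only, not minikube's implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires inside the given window, matching "-checkend <seconds>".
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}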
	I0626 20:47:22.464492   47779 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:22.464647   47779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:22.464707   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:22.497723   47779 cri.go:89] found id: ""
	I0626 20:47:22.497803   47779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:22.508914   47779 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:22.508940   47779 kubeadm.go:636] restartCluster start
	I0626 20:47:22.508994   47779 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:22.519855   47779 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:22.521400   47779 kubeconfig.go:92] found "default-k8s-diff-port-473235" server: "https://192.168.61.238:8444"
	I0626 20:47:22.525126   47779 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:22.536252   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:22.536311   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:22.548698   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.049731   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.049805   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.062575   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.548966   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.549050   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.566351   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.048839   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.048917   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.065016   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.549110   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.549211   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.563150   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:25.049739   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.049828   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.066148   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
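	The run above is a fixed-interval poll: roughly every 500ms the runner asks pgrep for a kube-apiserver process and logs "stopped" until one shows up. A condensed sketch of that loop; the pgrep pattern is taken from the log, while the timeout value is an assumption:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep, as the log does, until kube-apiserver
	// appears or the deadline passes.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(bytes.TrimSpace(out)) > 0 {
				return nil // got a pid
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(10 * time.Second); err != nil {
			fmt.Println(err)
		}
	}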
	I0626 20:47:23.496598   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.496624   47309 pod_ready.go:81] duration metric: took 2.105519396s waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.496637   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504045   47309 pod_ready.go:92] pod "kube-proxy-phrv2" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.504067   47309 pod_ready.go:81] duration metric: took 7.42294ms waiting for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504078   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022096   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:25.022123   47309 pod_ready.go:81] duration metric: took 1.518037516s waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022135   47309 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.861798   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:21.981234   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:21.981272   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:21.862292   48478 retry.go:31] will retry after 2.138021733s: waiting for machine to come up
	I0626 20:47:24.002651   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:24.003184   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:24.003215   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:24.003122   48478 retry.go:31] will retry after 2.016131828s: waiting for machine to come up
	I0626 20:47:26.020987   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:26.021487   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:26.021511   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:26.021427   48478 retry.go:31] will retry after 2.317082546s: waiting for machine to come up
	I0626 20:47:24.497636   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:26.997525   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:27.997348   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:27.997394   47605 pod_ready.go:81] duration metric: took 8.031967272s waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:27.997408   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.548979   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.549054   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.566040   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.049569   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.049636   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.061513   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.548864   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.548952   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.566095   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.049674   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.049818   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.067169   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.549748   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.549831   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.568977   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.048852   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.048921   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.064935   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.549510   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.549614   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.562781   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.049396   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.049482   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.063237   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.548762   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.548853   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.561289   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:30.048758   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.048832   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.061079   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.040010   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:29.536317   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.537367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:28.340238   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:28.340738   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:28.340774   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:28.340660   48478 retry.go:31] will retry after 3.9887538s: waiting for machine to come up
	I0626 20:47:30.014224   47605 pod_ready.go:102] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.016636   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.016660   47605 pod_ready.go:81] duration metric: took 3.019245103s waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.016669   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022769   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.022794   47605 pod_ready.go:81] duration metric: took 6.118745ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022806   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.031975   47605 pod_ready.go:92] pod "kube-proxy-q5khr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.032004   47605 pod_ready.go:81] duration metric: took 9.189713ms waiting for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.032015   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040203   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.040231   47605 pod_ready.go:81] duration metric: took 8.207477ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040244   47605 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:33.054175   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:30.549812   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.549897   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.562540   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.049000   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.049071   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.061358   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.549602   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.549664   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.562690   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.049131   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:32.049223   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:32.061951   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.536775   47779 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:32.536827   47779 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:32.536843   47779 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:32.536914   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:32.571353   47779 cri.go:89] found id: ""
	I0626 20:47:32.571434   47779 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:32.588931   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:32.599519   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:32.599585   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610183   47779 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610212   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:32.738386   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.418561   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.612946   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.740311   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.830927   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:33.830992   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.372343   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.872109   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
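	Note that the restart path never runs a full "kubeadm init"; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed /var/tmp/minikube/kubeadm.yaml, then polls for the apiserver as before. A sketch of that phase sequencing, with the binary and config paths mirroring the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.27.3/kubeadm"
		cfg := "/var/tmp/minikube/kubeadm.yaml"

		// Same phase order as the reconfigure sequence above.
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", cfg)
			if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
				panic(fmt.Sprintf("phase %v failed: %v\n%s", p, err, out))
			}
		}
	}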
	I0626 20:47:33.542864   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:36.037521   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:32.332668   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:32.333139   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:32.333169   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:32.333084   48478 retry.go:31] will retry after 3.571549947s: waiting for machine to come up
	I0626 20:47:35.906478   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.906962   46683 main.go:141] libmachine: (old-k8s-version-490377) Found IP for machine: 192.168.72.111
	I0626 20:47:35.906994   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has current primary IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.907004   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserving static IP address...
	I0626 20:47:35.907527   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.907573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | skip adding static IP to network mk-old-k8s-version-490377 - found existing host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"}
	I0626 20:47:35.907588   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserved static IP address: 192.168.72.111
	I0626 20:47:35.907605   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting for SSH to be available...
	I0626 20:47:35.907658   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Getting to WaitForSSH function...
	I0626 20:47:35.909932   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910346   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.910383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH client type: external
	I0626 20:47:35.910573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa (-rw-------)
	I0626 20:47:35.910604   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:35.910620   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | About to run SSH command:
	I0626 20:47:35.910635   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | exit 0
	I0626 20:47:36.006056   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | SSH cmd err, output: <nil>: 
	I0626 20:47:36.006429   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetConfigRaw
	I0626 20:47:36.007160   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.010144   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010519   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.010551   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010863   46683 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/config.json ...
	I0626 20:47:36.011106   46683 machine.go:88] provisioning docker machine ...
	I0626 20:47:36.011130   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.011366   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011542   46683 buildroot.go:166] provisioning hostname "old-k8s-version-490377"
	I0626 20:47:36.011561   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011705   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.014236   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014643   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.014674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014821   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.015013   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015156   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015371   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.015595   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.016010   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.016029   46683 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-490377 && echo "old-k8s-version-490377" | sudo tee /etc/hostname
	I0626 20:47:36.160735   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490377
	
	I0626 20:47:36.160797   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.163857   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164373   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.164425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164566   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.164778   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.164983   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.165128   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.165311   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.166001   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.166030   46683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-490377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-490377/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-490377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:36.302740   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:47:36.302789   46683 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:36.302839   46683 buildroot.go:174] setting up certificates
	I0626 20:47:36.302852   46683 provision.go:83] configureAuth start
	I0626 20:47:36.302868   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.303151   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.305958   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306411   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.306439   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306667   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.309069   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309447   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.309480   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309538   46683 provision.go:138] copyHostCerts
	I0626 20:47:36.309622   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:36.309635   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:36.309702   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:36.309813   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:36.309830   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:36.309868   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:36.309938   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:36.309947   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:36.309970   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:36.310026   46683 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-490377 san=[192.168.72.111 192.168.72.111 localhost 127.0.0.1 minikube old-k8s-version-490377]
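	The SAN list in the server-cert step above (both IPs, localhost, minikube, and the hostname) is what lets one certificate satisfy local, tunneled, and hostname-based clients. An illustrative crypto/x509 version; it self-signs to stay self-contained, whereas the real provisioner signs with the machine CA (ca.pem/ca-key.pem), so treat it purely as a sketch of the SAN handling:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-490377"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			// SANs mirror the log's "san=[...]" list.
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-490377"},
			IPAddresses: []net.IP{net.ParseIP("192.168.72.111"), net.ParseIP("127.0.0.1")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}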
	I0626 20:47:36.441131   46683 provision.go:172] copyRemoteCerts
	I0626 20:47:36.441183   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:36.441204   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.444557   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445034   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.445067   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445311   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.445540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.445700   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.445857   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:36.542375   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:36.570185   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0626 20:47:36.596725   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:36.622954   46683 provision.go:86] duration metric: configureAuth took 320.087643ms
	I0626 20:47:36.622983   46683 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:36.623205   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:47:36.623301   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.626305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626634   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.626666   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626856   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.627048   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627224   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627349   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.627520   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.627929   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.627954   46683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:36.963666   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:36.963695   46683 machine.go:91] provisioned docker machine in 952.57418ms
	I0626 20:47:36.963707   46683 start.go:300] post-start starting for "old-k8s-version-490377" (driver="kvm2")
	I0626 20:47:36.963719   46683 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:36.963747   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.964067   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:36.964099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.966948   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.967383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967528   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.967735   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.967900   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.968052   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.070309   46683 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:37.075040   46683 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:37.075064   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:37.075125   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:37.075208   46683 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:37.075306   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:37.086362   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:37.110475   46683 start.go:303] post-start completed in 146.752359ms
	I0626 20:47:37.110502   46683 fix.go:56] fixHost completed within 22.522880386s
	I0626 20:47:37.110525   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.113530   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.113925   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.113961   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.114168   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.114372   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114577   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114730   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.114896   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:37.115549   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:37.115572   46683 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:47:37.247352   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812457.183569581
	
	I0626 20:47:37.247376   46683 fix.go:206] guest clock: 1687812457.183569581
	I0626 20:47:37.247386   46683 fix.go:219] Guest: 2023-06-26 20:47:37.183569581 +0000 UTC Remote: 2023-06-26 20:47:37.110506986 +0000 UTC m=+360.350082215 (delta=73.062595ms)
	I0626 20:47:37.247410   46683 fix.go:190] guest clock delta is within tolerance: 73.062595ms
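	The clock check above parses the guest's "date +%s.%N" output, subtracts it from host time, and accepts small skew rather than forcing a resync. A sketch of the delta computation; the 2-second tolerance is an assumption, since the log only records that 73ms was within tolerance:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses "seconds.nanoseconds" as printed by the guest and
	// returns how far it lags or leads the supplied host time.
	func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 { // assumes a full 9-digit nanosecond field, as in the log
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return hostNow.Sub(time.Unix(sec, nsec)), nil
	}

	func main() {
		// Guest and host readings taken from the log lines above.
		d, err := clockDelta("1687812457.183569581", time.Unix(1687812457, 110506986))
		if err != nil {
			panic(err)
		}
		within := math.Abs(float64(d)) < float64(2*time.Second) // assumed tolerance
		fmt.Println(d, "within tolerance:", within)
	}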
	I0626 20:47:37.247416   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 22.659832787s
	I0626 20:47:37.247442   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.247723   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:37.250740   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251154   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.251194   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251316   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.251835   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252015   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252101   46683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:37.252144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.252251   46683 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:37.252273   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.255147   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255231   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255440   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255464   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255584   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.255756   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.255765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255792   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255930   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.255946   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.256080   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.256099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.256206   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.256301   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.370571   46683 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:37.376548   46683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:37.531359   46683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:37.540038   46683 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:37.540104   46683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:37.556531   46683 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:47:37.556554   46683 start.go:466] detecting cgroup driver to use...
	I0626 20:47:37.556620   46683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:37.574430   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:37.586766   46683 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:37.586829   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:37.599572   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:37.612901   46683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:37.717489   46683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:37.851503   46683 docker.go:212] disabling docker service ...
	I0626 20:47:37.851576   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:37.864932   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:37.877087   46683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:37.990007   46683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:38.107613   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
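To confirm that the stop/disable/mask sequence above really left CRI-O as the only container runtime, a quick manual check (a sketch, not part of minikube itself):
	systemctl is-enabled docker.socket cri-docker.service cri-docker.socket   # expect "masked" or "disabled"
	systemctl is-active docker cri-docker.service                             # expect "inactive"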
	I0626 20:47:38.122183   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:38.141502   46683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0626 20:47:38.141567   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.152052   46683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:38.152128   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.161786   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.172779   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
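The three sed edits above rewrite the CRI-O drop-in in place. A sketch of verifying the result, assuming the stock /etc/crio/crio.conf.d/02-crio.conf layout:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits:
	#   pause_image = "registry.k8s.io/pause:3.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"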
	I0626 20:47:38.182823   46683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:38.192695   46683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:38.201322   46683 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:38.201404   46683 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:38.213549   46683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
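br_netfilter is loaded here because the sysctl probe just above failed with status 255: the /proc entry only exists once the module is in. The equivalent manual sequence, as a sketch:
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print 1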
	I0626 20:47:38.225080   46683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:38.336249   46683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:38.508323   46683 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:38.508443   46683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:47:38.514430   46683 start.go:534] Will wait 60s for crictl version
	I0626 20:47:38.514496   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:38.518918   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:38.559642   46683 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:38.559731   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.616720   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.678573   46683 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0626 20:47:35.555132   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.053446   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:35.373039   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.872006   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.895929   47779 api_server.go:72] duration metric: took 2.064992302s to wait for apiserver process to appear ...
	I0626 20:47:35.895959   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:35.895982   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:35.896602   47779 api_server.go:269] stopped: https://192.168.61.238:8444/healthz: Get "https://192.168.61.238:8444/healthz": dial tcp 192.168.61.238:8444: connect: connection refused
	I0626 20:47:36.397305   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.868801   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.868839   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.868854   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.907251   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.907280   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.907310   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.921394   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.921428   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:40.397045   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.405040   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.405071   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:40.897690   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.904374   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.904424   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:41.396883   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:41.404743   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:47:41.420191   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:41.420219   47779 api_server.go:131] duration metric: took 5.524252602s to wait for apiserver health ...
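The 403 -> 500 -> 200 progression above is normal during an apiserver restart: anonymous /healthz requests are Forbidden until the rbac/bootstrap-roles post-start hook grants access, individual hooks then report failures (500) until initialization finishes, and finally the endpoint returns a plain "ok". The per-check breakdown can be requested directly (a sketch, using the endpoint from this log):
	curl -sk "https://192.168.61.238:8444/healthz?verbose"   # repeat until it prints "ok"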
	I0626 20:47:41.420231   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:41.420249   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:41.422187   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:47:38.537628   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:40.538267   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.680019   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:38.682934   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683263   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:38.683294   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683534   46683 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:38.687976   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:38.701534   46683 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 20:47:38.701610   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:38.739497   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:38.739584   46683 ssh_runner.go:195] Run: which lz4
	I0626 20:47:38.744080   46683 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:38.748755   46683 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:38.748792   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0626 20:47:40.654759   46683 crio.go:444] Took 1.910714 seconds to copy over tarball
	I0626 20:47:40.654830   46683 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:47:40.057751   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:42.555707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:41.423617   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:41.447117   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:41.485897   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:41.505667   47779 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:41.505714   47779 system_pods.go:61] "coredns-5d78c9869d-78zrr" [2927dce3-aa13-4ed4-b5a4-bc1b101ec044] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:41.505730   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [5bbba401-cfdd-4e97-ac44-3d1410344b23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:41.505742   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [90d064bc-d31f-4690-b100-8979cdd518c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:41.505755   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [3f686efe-3c90-42ed-a1b9-2cda3e7e49b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:41.505773   47779 system_pods.go:61] "kube-proxy-7t2dk" [bebeb55d-8c7d-4543-9ee1-adbd946904f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:41.505786   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [c2436cf6-0128-425c-9db3-b3d01e5fb5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:41.505799   47779 system_pods.go:61] "metrics-server-74d5c6b9c-swcxn" [81e42c6b-4c7d-40b1-bd4a-ccf7ce2dea17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:41.505811   47779 system_pods.go:61] "storage-provisioner" [18d1c7dc-00a6-4842-b441-f3468adde4ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:41.505822   47779 system_pods.go:74] duration metric: took 19.895923ms to wait for pod list to return data ...
	I0626 20:47:41.505833   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:41.515165   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:41.515201   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:41.515215   47779 node_conditions.go:105] duration metric: took 9.372368ms to run NodePressure ...
	I0626 20:47:41.515243   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:41.848353   47779 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854780   47779 kubeadm.go:787] kubelet initialised
	I0626 20:47:41.854805   47779 kubeadm.go:788] duration metric: took 6.420882ms waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854814   47779 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:41.861323   47779 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.867181   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867214   47779 pod_ready.go:81] duration metric: took 5.86597ms waiting for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.867225   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867235   47779 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.872900   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872928   47779 pod_ready.go:81] duration metric: took 5.684109ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.872940   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872948   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.881471   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881501   47779 pod_ready.go:81] duration metric: took 8.543041ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.881513   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881531   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.892246   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892292   47779 pod_ready.go:81] duration metric: took 10.741136ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.892310   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892325   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297272   47779 pod_ready.go:92] pod "kube-proxy-7t2dk" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:43.297299   47779 pod_ready.go:81] duration metric: took 1.404965565s waiting for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297308   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
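Each pod_ready wait above polls the pod's Ready condition until it flips to "True" or the 4m budget runs out. The same wait can be expressed with kubectl (a sketch, using the context and pod names from this log):
	kubectl --context default-k8s-diff-port-473235 -n kube-system \
	  wait --for=condition=Ready pod/kube-scheduler-default-k8s-diff-port-473235 --timeout=4m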
	I0626 20:47:42.544224   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.846930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.389432   46683 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.73456858s)
	I0626 20:47:44.389462   46683 crio.go:451] Took 3.734677 seconds to extract the tarball
	I0626 20:47:44.389480   46683 ssh_runner.go:146] rm: /preloaded.tar.lz4
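The preload path above is: stat /preloaded.tar.lz4 on the guest, scp the ~440MB tarball when it is missing, extract it with lz4 into /var, then remove it. A hand-rolled sketch of the same flow (the /tmp staging path is an assumption; minikube writes the file over its own SSH transfer):
	KEY=~/.minikube/machines/old-k8s-version-490377/id_rsa
	TAR=~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	scp -i "$KEY" "$TAR" docker@192.168.72.111:/tmp/preloaded.tar.lz4
	ssh -i "$KEY" docker@192.168.72.111 'sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'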
	I0626 20:47:44.438169   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:44.478220   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:44.478250   46683 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:47:44.478337   46683 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.478364   46683 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.478383   46683 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.478384   46683 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.478450   46683 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0626 20:47:44.478365   46683 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.478345   46683 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.478339   46683 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479752   46683 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.479758   46683 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.479760   46683 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.479759   46683 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.479748   46683 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.479802   46683 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.479810   46683 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479817   46683 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0626 20:47:44.681554   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720619   46683 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0626 20:47:44.720677   46683 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720730   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.724810   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.753258   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0626 20:47:44.765072   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.767167   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.768723   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0626 20:47:44.769466   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.769474   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.807428   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.904206   46683 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0626 20:47:44.904243   46683 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0626 20:47:44.904250   46683 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.904261   46683 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926166   46683 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0626 20:47:44.926203   46683 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.926204   46683 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0626 20:47:44.926222   46683 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.926222   46683 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0626 20:47:44.926248   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926247   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926251   46683 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0626 20:47:44.926365   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936135   46683 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0626 20:47:44.936175   46683 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.936236   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936252   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.936274   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.940272   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.940352   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0626 20:47:44.940409   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.952147   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:45.031640   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0626 20:47:45.031677   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0626 20:47:45.061947   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0626 20:47:45.062070   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0626 20:47:45.062166   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0626 20:47:45.062261   46683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.062279   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0626 20:47:45.067511   46683 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0626 20:47:45.067561   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0626 20:47:45.094726   46683 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.094780   46683 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.384887   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:45.947601   46683 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0626 20:47:45.947707   46683 cache_images.go:92] LoadImages completed in 1.469441722s
	W0626 20:47:45.947778   46683 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
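When the preload lacks an image, LoadImages falls back to per-image transfer as above: inspect on the guest, remove the stale tag, scp the cached archive, load it, and verify. The load/verify pair, as a sketch:
	sudo podman load -i /var/lib/minikube/images/pause_3.1
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1   # a non-empty ID confirms the load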
	I0626 20:47:45.947863   46683 ssh_runner.go:195] Run: crio config
	I0626 20:47:46.009928   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:47:46.009955   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:46.009968   46683 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:46.009987   46683 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-490377 NodeName:old-k8s-version-490377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0626 20:47:46.010140   46683 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-490377"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-490377
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.111:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:47:46.010224   46683 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-490377 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 20:47:46.010284   46683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0626 20:47:46.023111   46683 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:46.023196   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:46.034988   46683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0626 20:47:46.056824   46683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:46.077802   46683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0626 20:47:46.102465   46683 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:46.107391   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:46.121242   46683 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377 for IP: 192.168.72.111
	I0626 20:47:46.121277   46683 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:46.121466   46683 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:46.121520   46683 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:46.121635   46683 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.key
	I0626 20:47:46.121735   46683 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key.760f2aeb
	I0626 20:47:46.121789   46683 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key
	I0626 20:47:46.121928   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:46.121970   46683 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:46.121985   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:46.122024   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:46.122063   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:46.122098   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:46.122158   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:46.123026   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:46.149101   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:46.179305   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:46.207421   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:46.233407   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:46.259148   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:46.284728   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:46.312152   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:46.341061   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:46.370455   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:46.398160   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:46.424710   46683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:46.446379   46683 ssh_runner.go:195] Run: openssl version
	I0626 20:47:46.452825   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:46.466808   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472676   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472760   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.479077   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:46.490061   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:46.501801   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.506966   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.507034   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.513146   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:46.523600   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:46.534659   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540612   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540677   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.548499   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
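The 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is how TLS libraries locate CA files under /etc/ssl/certs. A sketch of deriving one such link by hand:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"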
	I0626 20:47:46.562786   46683 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:46.569679   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:46.576129   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:46.582331   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:46.588334   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:46.595635   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:46.603058   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
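Each `-checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, presumably so minikube can decide whether to regenerate it. A sketch looping the same check over a few of the certs named in the log:
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    && echo "$c: valid >24h" || echo "$c: expiring soon"
	done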
	I0626 20:47:46.611126   46683 kubeadm.go:404] StartCluster: {Name:old-k8s-version-490377 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:46.611211   46683 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:46.611277   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:46.650099   46683 cri.go:89] found id: ""
	I0626 20:47:46.650177   46683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:46.660940   46683 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:46.660964   46683 kubeadm.go:636] restartCluster start
	I0626 20:47:46.661022   46683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:46.671400   46683 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:46.672450   46683 kubeconfig.go:92] found "old-k8s-version-490377" server: "https://192.168.72.111:8443"
	I0626 20:47:46.675477   46683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:46.684496   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:46.684568   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:46.695719   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:45.056085   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.554295   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:45.865956   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:48.003697   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.505286   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:49.505314   47779 pod_ready.go:81] duration metric: took 6.207998312s waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:49.505328   47779 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:47.037142   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.037207   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.535460   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.196149   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.196252   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.211751   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:47.696286   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.696381   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.707472   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.195967   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.196041   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.207809   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.696375   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.696449   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.708571   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.196097   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.196176   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.207717   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.696692   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.696768   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.708954   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.196531   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.196611   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.209111   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.696563   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.696648   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.708744   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.196237   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.196305   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.207654   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.695908   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.695988   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.708029   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.056186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.057083   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.519442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.520019   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.536833   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.036673   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.196170   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.196233   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.208953   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:52.696518   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.696600   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.707537   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.196046   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.196113   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.207272   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.695791   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.695873   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.706845   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.196452   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.196530   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.208048   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.696169   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.696236   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.707640   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.195889   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.195968   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.207560   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.695899   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.695978   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.707573   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:56.195900   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:56.195973   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:56.207335   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:56.685138   46683 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:56.685165   46683 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:56.685180   46683 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:56.685239   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:56.719427   46683 cri.go:89] found id: ""
	I0626 20:47:56.719494   46683 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:56.735328   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:56.747355   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:56.747420   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756129   46683 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756156   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:54.554213   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:57.052902   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:59.055349   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.018337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.025514   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.039195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.538216   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.883656   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.423073   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.641018   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.751205   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.840521   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:57.840645   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.355178   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.854929   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.355164   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.385611   46683 api_server.go:72] duration metric: took 1.545094971s to wait for apiserver process to appear ...
	I0626 20:47:59.385632   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:59.385650   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:01.553510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.554922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.520442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.021809   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.040767   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.535801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:04.386860   46683 api_server.go:269] stopped: https://192.168.72.111:8443/healthz: Get "https://192.168.72.111:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0626 20:48:04.888001   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:05.958461   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:48:05.958486   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:48:05.958498   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.017029   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.017061   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.387577   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.394038   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.394072   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.887033   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.902891   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.902931   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:07.387632   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:07.393827   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:48:07.402591   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:48:07.402618   46683 api_server.go:131] duration metric: took 8.016980167s to wait for apiserver health ...
	I0626 20:48:07.402628   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:48:07.402639   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:48:07.404494   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:48:06.054185   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:08.055165   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.520306   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.521293   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:10.021358   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.537058   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:09.537801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.405919   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:48:07.416748   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:48:07.436249   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:48:07.445695   46683 system_pods.go:59] 7 kube-system pods found
	I0626 20:48:07.445732   46683 system_pods.go:61] "coredns-5644d7b6d9-5lcxw" [8e1a5fff-55d8-4d32-ae6f-c7694c8b5878] Running
	I0626 20:48:07.445741   46683 system_pods.go:61] "etcd-old-k8s-version-490377" [3fff7ab3-7ac7-4417-b3b8-9794f427c880] Running
	I0626 20:48:07.445750   46683 system_pods.go:61] "kube-apiserver-old-k8s-version-490377" [1b8e6b87-0b15-4586-8133-2dd33ac0b069] Running
	I0626 20:48:07.445771   46683 system_pods.go:61] "kube-controller-manager-old-k8s-version-490377" [2635a03c-884d-4245-a8ef-cb02e14443b8] Running
	I0626 20:48:07.445792   46683 system_pods.go:61] "kube-proxy-64btm" [0a8ee3c6-93a1-4989-94d0-209e8c655a64] Running
	I0626 20:48:07.445805   46683 system_pods.go:61] "kube-scheduler-old-k8s-version-490377" [2a6905a0-4f64-4cab-9b6d-55c708c07f8d] Running
	I0626 20:48:07.445815   46683 system_pods.go:61] "storage-provisioner" [9bf36874-b862-41f9-89d4-2d900adc2003] Running
	I0626 20:48:07.445826   46683 system_pods.go:74] duration metric: took 9.553318ms to wait for pod list to return data ...
	I0626 20:48:07.445836   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:48:07.450777   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:48:07.450816   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:48:07.450831   46683 node_conditions.go:105] duration metric: took 4.985221ms to run NodePressure ...
	I0626 20:48:07.450854   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:48:07.693070   46683 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:48:07.696336   46683 retry.go:31] will retry after 291.332727ms: kubelet not initialised
	I0626 20:48:07.992856   46683 retry.go:31] will retry after 210.561512ms: kubelet not initialised
	I0626 20:48:08.208369   46683 retry.go:31] will retry after 371.110023ms: kubelet not initialised
	I0626 20:48:08.585342   46683 retry.go:31] will retry after 1.199452561s: kubelet not initialised
	I0626 20:48:09.790625   46683 retry.go:31] will retry after 923.734482ms: kubelet not initialised
	I0626 20:48:10.719166   46683 retry.go:31] will retry after 1.019822632s: kubelet not initialised
	I0626 20:48:11.743554   46683 retry.go:31] will retry after 3.253867153s: kubelet not initialised
	I0626 20:48:10.552964   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.554534   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.520923   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.019384   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.036991   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:14.536734   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.002028   46683 retry.go:31] will retry after 2.234934883s: kubelet not initialised
	I0626 20:48:14.556223   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.053741   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.054276   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.021470   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.519794   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.036192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.036285   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:21.037136   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.242709   46683 retry.go:31] will retry after 6.079359776s: kubelet not initialised
	I0626 20:48:21.054851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.553653   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:22.020435   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:24.022102   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.037271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:25.037337   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.328332   46683 retry.go:31] will retry after 12.999865358s: kubelet not initialised
	I0626 20:48:25.553983   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.052253   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:26.518782   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.520217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:27.535792   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:29.536336   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:30.055419   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.553794   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:31.018773   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:33.020048   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:35.021492   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.036513   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:34.037364   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.535663   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.334795   46683 retry.go:31] will retry after 13.541680893s: kubelet not initialised
	I0626 20:48:35.052975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.053634   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.053672   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.519603   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.520279   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:38.536271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:40.536344   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.553411   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.554235   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.520569   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.522354   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:42.536811   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.035291   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.554795   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.053080   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:46.019919   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.021534   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:47.036908   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.537386   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.882566   46683 kubeadm.go:787] kubelet initialised
	I0626 20:48:49.882597   46683 kubeadm.go:788] duration metric: took 42.189498896s waiting for restarted kubelet to initialise ...
	I0626 20:48:49.882608   46683 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:48:49.888018   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894462   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.894488   46683 pod_ready.go:81] duration metric: took 6.438689ms waiting for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894501   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899336   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.899358   46683 pod_ready.go:81] duration metric: took 4.848554ms waiting for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899370   46683 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903574   46683 pod_ready.go:92] pod "etcd-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.903593   46683 pod_ready.go:81] duration metric: took 4.21548ms waiting for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903605   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908052   46683 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.908071   46683 pod_ready.go:81] duration metric: took 4.457812ms waiting for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908091   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281099   46683 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.281124   46683 pod_ready.go:81] duration metric: took 373.02512ms waiting for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281139   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681520   46683 pod_ready.go:92] pod "kube-proxy-64btm" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.681541   46683 pod_ready.go:81] duration metric: took 400.395983ms waiting for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681552   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081638   46683 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:51.081657   46683 pod_ready.go:81] duration metric: took 400.09969ms waiting for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081666   46683 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.053581   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.053802   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:50.520090   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.019821   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.020035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.037008   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.037516   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:56.037585   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.491534   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.989758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.552843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.054370   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.020770   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.520039   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.535930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.536377   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.488491   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.489659   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.552927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.056474   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:01.520560   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.019945   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.536728   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.537724   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.989651   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.989796   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.552707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.553918   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:08.554230   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.520608   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.020075   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:07.036576   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.537071   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.990147   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.489229   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.053576   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:13.054110   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.519744   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.020968   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:12.037949   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.537389   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.989856   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.488429   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.490529   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:15.553553   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.054036   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.519975   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.520288   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:17.036172   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:19.036248   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.036421   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.989943   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.990154   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.553570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.554626   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.020817   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.520602   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.036595   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.038742   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.990299   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:24.994358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.053465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.053635   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.520912   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:28.020413   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.537294   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.489707   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.990957   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.552847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:31.554360   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.052585   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:30.520207   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.521484   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:35.020064   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.035666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.036325   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.535889   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.489468   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.989668   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.556092   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.054617   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:37.519850   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:40.020217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.036499   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.537332   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.992357   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.489925   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.553528   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.052935   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:42.520450   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.520634   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.035299   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.036688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.990255   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.489449   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.553009   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.553560   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:47.018978   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.020289   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.535753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.536227   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.990710   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.490459   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.553710   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.054824   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.520532   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:54.027509   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:52.537108   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.036452   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.989608   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.990105   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.990610   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.552894   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.553520   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:56.519796   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.021401   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.537189   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.537365   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.991065   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.489396   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.053139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.062882   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:01.519625   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:03.520031   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.037036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.988698   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.991107   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.551742   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:06.553955   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.053612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:05.520676   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:08.019671   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:10.021418   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.035613   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.036666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.536861   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.488874   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.490059   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.492236   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.553481   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.054574   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:12.518824   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.519670   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.036399   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.537496   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:13.990228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.488219   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.054609   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.553511   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.519795   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.520535   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:19.037355   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.037964   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.488819   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:20.489536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.053521   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.553922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.021035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.519784   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.535974   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.536845   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:22.988574   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:24.990088   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:26.052017   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.054905   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.520011   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.019323   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.019500   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.537999   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.036187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.488859   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:29.990482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.551701   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.554272   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.019810   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.023728   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.036817   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.042849   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.536415   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.488492   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.491986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:35.053986   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:37.055115   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.520551   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.019307   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:38.537119   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:40.537474   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.991471   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.489241   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.490458   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.552836   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.553914   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:44.052850   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.020033   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.520646   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.036648   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:45.036959   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.990768   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.489482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.053271   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.553811   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.018851   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.021042   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.021254   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:47.536099   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.036995   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.489670   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.990231   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.554677   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.053841   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.520067   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.021727   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.042201   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:54.536260   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.489402   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.492509   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.055031   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.055181   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.521342   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.020905   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.036992   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.037534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:01.538152   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.993709   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.488776   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.555263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.054478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.519672   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:05.020878   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.036330   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.036424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.489742   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.988712   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.555161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.052680   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.055326   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.519641   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.520120   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.536306   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:10.537094   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.988973   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.989715   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.488986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.554973   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.054638   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.019264   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.020253   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.537126   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.037318   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:13.490053   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.988498   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.055193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:18.553665   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.522548   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.020609   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.536765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.037132   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.990230   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.991216   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.555044   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.055590   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:21.520052   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.520574   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:22.038085   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.535549   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.022544   47309 pod_ready.go:81] duration metric: took 4m0.000394525s waiting for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:25.022570   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:25.022598   47309 pod_ready.go:38] duration metric: took 4m12.221722724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:25.022623   47309 kubeadm.go:640] restartCluster took 4m31.561880232s
	W0626 20:51:25.022684   47309 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:25.022722   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
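The interleaved pod_ready.go lines above come from four profiles restarting in parallel (log prefixes 46683, 47309, 47605, 47779); each one polls its metrics-server pod roughly every two seconds until a 4m0s deadline, then logs the timeout seen here ("context deadline exceeded") and falls back to resetting the cluster. A minimal sketch of that poll-until-deadline pattern with client-go (illustrative only; waitPodReady and the 2s interval are assumptions, not minikube's actual pod_ready.go):

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod until it is Ready or the deadline passes,
// mirroring the repeated `has status "Ready":"False"` lines above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(p) {
			return nil
		}
		fmt.Printf("pod %q in %q namespace is not Ready yet\n", name, ns)
		select {
		case <-ctx.Done():
			// Surfaces as "WaitExtra: waitPodCondition: context deadline exceeded".
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}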
	I0626 20:51:22.489438   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.490731   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.554637   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:27.555070   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.020700   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.520337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.990408   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.990900   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.490197   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:30.053627   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.041205   47605 pod_ready.go:81] duration metric: took 4m0.000945978s waiting for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:31.041235   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:31.041252   47605 pod_ready.go:38] duration metric: took 4m11.097608636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:31.041297   47605 kubeadm.go:640] restartCluster took 4m31.299321581s
	W0626 20:51:31.041365   47605 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:31.041409   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:31.019045   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.022453   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.492871   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.989984   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.520977   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:37.521128   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.021691   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:38.489349   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.989368   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.519812   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:44.520689   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.989461   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:45.491205   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:47.019936   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.506391   47779 pod_ready.go:81] duration metric: took 4m0.001048325s waiting for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:49.506423   47779 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:49.506441   47779 pod_ready.go:38] duration metric: took 4m7.651614118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:49.506483   47779 kubeadm.go:640] restartCluster took 4m26.997522391s
	W0626 20:51:49.506561   47779 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:49.506595   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:47.990134   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.990758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:52.489144   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:54.990008   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:56.650050   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.627303734s)
	I0626 20:51:56.650132   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:51:56.665246   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:51:56.678749   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:51:56.690413   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:51:56.690459   47309 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
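After the 4m0s wait fails, the recovery flow above is: kubeadm reset (31.6s here), confirm the kubelet is inactive, swap in the freshly rendered kubeadm.yaml, probe for leftover /etc/kubernetes/*.conf files (the ls exit status 2 is expected right after a reset and simply means there is no stale config to clean up), then re-run kubeadm init. A condensed sketch of that sequence (assumption: run with plain os/exec instead of minikube's ssh_runner, and with the long --ignore-preflight-errors list abbreviated):

package kubeadmflow

import "os/exec"

// resetAndInit mirrors the reset -> stale-config probe -> init sequence above.
func resetAndInit(kubeadm, cfg string) error {
	// Wipe the previous control plane; this removes /etc/kubernetes/*.conf.
	if err := exec.Command("sudo", kubeadm, "reset",
		"--cri-socket", "/var/run/crio/crio.sock", "--force").Run(); err != nil {
		return err
	}
	// Probe for stale kubeconfigs (the "config check" above); a non-zero exit
	// right after a reset just means there is nothing to clean up.
	_ = exec.Command("sudo", "ls", "/etc/kubernetes/admin.conf").Run()
	// Re-initialise the control plane from the freshly copied config.
	return exec.Command("sudo", kubeadm, "init",
		"--config", cfg,
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem").Run()
}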
	I0626 20:51:56.757308   47309 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:51:56.757415   47309 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:51:56.915845   47309 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:51:56.916021   47309 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:51:56.916158   47309 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:51:57.137465   47309 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:51:57.139330   47309 out.go:204]   - Generating certificates and keys ...
	I0626 20:51:57.139431   47309 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:51:57.139514   47309 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:51:57.139648   47309 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:51:57.139718   47309 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:51:57.139852   47309 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:51:57.139914   47309 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:51:57.139997   47309 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:51:57.140101   47309 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:51:57.140224   47309 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:51:57.140830   47309 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:51:57.141343   47309 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:51:57.141471   47309 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:51:57.294061   47309 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:51:57.436714   47309 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:51:57.707612   47309 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:51:57.875383   47309 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:51:57.893698   47309 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:51:57.895257   47309 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:51:57.895427   47309 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:51:58.020261   47309 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:51:58.022209   47309 out.go:204]   - Booting up control plane ...
	I0626 20:51:58.022349   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:51:58.023359   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:51:58.024253   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:51:58.026955   47309 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:51:58.032948   47309 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
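The wait-control-plane phase above is the same pattern again one level down: kubeadm writes the static Pod manifests, then polls the API server's health endpoint until it answers or its own 4m0s budget runs out (it succeeds after about 7.5 seconds a few lines below). A rough sketch of such a health poll (assumption: kubeadm's real check also consults the kubelet and uses the admin client; this probes only /healthz on the apiserver port):

package cpwait

import (
	"crypto/tls"
	"net/http"
	"time"
)

// waitAPIServerHealthy polls https://<addr>/healthz until it returns 200 OK
// or the deadline passes.
func waitAPIServerHealthy(addr string, timeout time.Duration) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The cluster CA is self-signed, so skip verification for the probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://" + addr + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(time.Second)
	}
	return false
}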
	I0626 20:51:57.489729   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:59.490578   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:01.491617   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:05.539291   47309 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505351 seconds
	I0626 20:52:05.539449   47309 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:05.564127   47309 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:06.097928   47309 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:06.098155   47309 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-934450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:06.617147   47309 kubeadm.go:322] [bootstrap-token] Using token: 7fs1fc.9teiyerfkduv7ctw
	I0626 20:52:03.989716   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.489773   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.618462   47309 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:06.618602   47309 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:06.631936   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:06.655354   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:06.662468   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:06.673817   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:06.680979   47309 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:06.717394   47309 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:07.015067   47309 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:07.079315   47309 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:07.079362   47309 kubeadm.go:322] 
	I0626 20:52:07.079450   47309 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:07.079464   47309 kubeadm.go:322] 
	I0626 20:52:07.079544   47309 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:07.079556   47309 kubeadm.go:322] 
	I0626 20:52:07.079597   47309 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:07.079680   47309 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:07.079765   47309 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:07.079782   47309 kubeadm.go:322] 
	I0626 20:52:07.079867   47309 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:07.079880   47309 kubeadm.go:322] 
	I0626 20:52:07.079960   47309 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:07.079971   47309 kubeadm.go:322] 
	I0626 20:52:07.080038   47309 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:07.080123   47309 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:07.080233   47309 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:07.080249   47309 kubeadm.go:322] 
	I0626 20:52:07.080370   47309 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:07.080467   47309 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:07.080481   47309 kubeadm.go:322] 
	I0626 20:52:07.080574   47309 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.080692   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:07.080738   47309 kubeadm.go:322] 	--control-plane 
	I0626 20:52:07.080756   47309 kubeadm.go:322] 
	I0626 20:52:07.080858   47309 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:07.080870   47309 kubeadm.go:322] 
	I0626 20:52:07.080979   47309 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.081124   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:07.082329   47309 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
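The --discovery-token-ca-cert-hash in the join commands above is not arbitrary: it is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA they discover via the bootstrap token. A small sketch of how that value can be recomputed from the CA file (the ca.crt path is an assumption):

package joinhash

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces the "sha256:..." pin printed in the kubeadm join command.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath) // e.g. /var/lib/minikube/certs/ca.crt
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	// Hash the DER-encoded SubjectPublicKeyInfo of the CA's public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}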
	I0626 20:52:07.082353   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:52:07.082369   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:07.084307   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:07.804074   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (36.762635025s)
	I0626 20:52:07.804158   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:07.819772   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:07.830166   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:07.839585   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:07.839633   47605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:08.061341   47605 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:07.085644   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:07.113105   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
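The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced at "Configuring bridge CNI". For reference, a bridge conflist of this shape looks roughly like the following (field values here are typical defaults for a bridge network, not the verbatim file contents):

package cniconf

// bridgeConflist sketches the kind of chained bridge + portmap configuration
// that CRI-O loads from /etc/cni/net.d for pod networking.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`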
	I0626 20:52:07.158420   47309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:07.158542   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.158590   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=no-preload-934450 minikube.k8s.io/updated_at=2023_06_26T20_52_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.637925   47309 ops.go:34] apiserver oom_adj: -16
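Two things happen side by side above: minikube reads the API server's OOM-score adjustment (-16, meaning the kernel is strongly discouraged from OOM-killing it) and issues two kubectl commands, one creating the minikube-rbac binding that grants cluster-admin to the kube-system default service account, and one stamping the node with minikube.k8s.io/* labels. A sketch of those two calls (flags trimmed from the log; error handling minimal):

package rbacsetup

import "os/exec"

// elevateAndLabel mirrors the clusterrolebinding and node-label commands above.
func elevateAndLabel(kubectl, kubeconfig, node string) error {
	// Grant cluster-admin to kube-system:default (the minikube-rbac binding).
	if err := exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default",
		"--kubeconfig="+kubeconfig).Run(); err != nil {
		return err
	}
	// Stamp the node with minikube metadata (version, commit, name, primary).
	return exec.Command("sudo", kubectl, "label", "nodes",
		"minikube.k8s.io/name="+node, "minikube.k8s.io/primary=true",
		"--all", "--overwrite", "--kubeconfig="+kubeconfig).Run()
}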
	I0626 20:52:07.638078   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.262589   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.762326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.262326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.762334   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.262485   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.762376   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:11.262645   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.490810   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:10.990521   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:11.762599   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.262690   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.762512   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.262844   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.762234   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.262587   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.762670   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.262293   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.763106   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:16.263264   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.991151   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:15.489549   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:19.659464   47605 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:19.659534   47605 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:19.659620   47605 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:19.659793   47605 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:19.659913   47605 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:52:19.659993   47605 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:19.661681   47605 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:19.661770   47605 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:19.661860   47605 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:19.661969   47605 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:19.662065   47605 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:19.662158   47605 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:19.662226   47605 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:19.662321   47605 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:19.662401   47605 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:19.662487   47605 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:19.662595   47605 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:19.662649   47605 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:19.662717   47605 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:19.662779   47605 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:19.662849   47605 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:19.662928   47605 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:19.663014   47605 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:19.663128   47605 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:19.663231   47605 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:19.663286   47605 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:19.663370   47605 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:19.664951   47605 out.go:204]   - Booting up control plane ...
	I0626 20:52:19.665063   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:19.665157   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:19.665246   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:19.665347   47605 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:19.665554   47605 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:19.665662   47605 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504998 seconds
	I0626 20:52:19.665792   47605 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:19.665948   47605 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:19.666027   47605 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:19.666278   47605 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-299839 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:19.666360   47605 kubeadm.go:322] [bootstrap-token] Using token: e53kqf.6hnw5p7blg3e1mpb
	I0626 20:52:19.667988   47605 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:19.668104   47605 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:19.668203   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:19.668357   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:19.668500   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:19.668632   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:19.668732   47605 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:19.668890   47605 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:19.668953   47605 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:19.669024   47605 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:19.669042   47605 kubeadm.go:322] 
	I0626 20:52:19.669122   47605 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:19.669135   47605 kubeadm.go:322] 
	I0626 20:52:19.669243   47605 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:19.669253   47605 kubeadm.go:322] 
	I0626 20:52:19.669284   47605 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:19.669392   47605 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:19.669472   47605 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:19.669483   47605 kubeadm.go:322] 
	I0626 20:52:19.669561   47605 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:19.669571   47605 kubeadm.go:322] 
	I0626 20:52:19.669642   47605 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:19.669661   47605 kubeadm.go:322] 
	I0626 20:52:19.669724   47605 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:19.669831   47605 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:19.669941   47605 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:19.669951   47605 kubeadm.go:322] 
	I0626 20:52:19.670055   47605 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:19.670169   47605 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:19.670179   47605 kubeadm.go:322] 
	I0626 20:52:19.670283   47605 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670428   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:19.670469   47605 kubeadm.go:322] 	--control-plane 
	I0626 20:52:19.670484   47605 kubeadm.go:322] 
	I0626 20:52:19.670588   47605 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:19.670603   47605 kubeadm.go:322] 
	I0626 20:52:19.670715   47605 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670850   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:19.670863   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:52:19.670875   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:19.672750   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:16.762961   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.263008   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.762325   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.262618   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.762659   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.262343   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.763023   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.932557   47309 kubeadm.go:1081] duration metric: took 12.774065652s to wait for elevateKubeSystemPrivileges.
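The burst of identical "kubectl get sa default" runs every 500ms above is the elevateKubeSystemPrivileges wait: the default service account is created asynchronously by the controller-manager after init, so minikube keeps retrying until the get succeeds (12.77s here). A minimal sketch of that retry loop (assumption: simplified to a local exec call rather than minikube's ssh_runner):

package sawait

import (
	"os/exec"
	"time"
)

// waitDefaultServiceAccount retries `kubectl get sa default` until it
// succeeds or the timeout expires, matching the polling cadence above.
func waitDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}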
	I0626 20:52:19.932647   47309 kubeadm.go:406] StartCluster complete in 5m26.514862376s
	I0626 20:52:19.932687   47309 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:19.932796   47309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:19.935445   47309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
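Updating the shared kubeconfig is guarded by a lock (the "WriteFile acquiring" line above) so the parallel profiles do not clobber each other's entries in the same file. A stand-in sketch using flock (assumption: minikube's lock.go uses its own named-mutex scheme with the Delay/Timeout shown in the log, not flock):

package kclock

import (
	"os"
	"syscall"
)

// writeFileLocked serialises writers on a sidecar .lock file before updating path.
func writeFileLocked(path string, data []byte) error {
	lock, err := os.OpenFile(path+".lock", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer lock.Close()
	// Block until we hold an exclusive lock, then write and release.
	if err := syscall.Flock(int(lock.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(lock.Fd()), syscall.LOCK_UN)
	return os.WriteFile(path, data, 0o600)
}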
	I0626 20:52:19.935820   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:19.936149   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:19.936267   47309 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
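The toEnable map above lists every known addon with its desired state; for this profile only default-storageclass, metrics-server, and storage-provisioner are true, and the following lines show each of those being set in turn. A trivial sketch of filtering that map down to the enabled set:

package addonsel

// enabled returns the names whose desired state in the map is true.
func enabled(toEnable map[string]bool) []string {
	var on []string
	for name, want := range toEnable {
		if want {
			on = append(on, name)
		}
	}
	return on
}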
	I0626 20:52:19.936369   47309 addons.go:66] Setting storage-provisioner=true in profile "no-preload-934450"
	I0626 20:52:19.936388   47309 addons.go:228] Setting addon storage-provisioner=true in "no-preload-934450"
	W0626 20:52:19.936396   47309 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:19.936453   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.936890   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.936917   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.936996   47309 addons.go:66] Setting default-storageclass=true in profile "no-preload-934450"
	I0626 20:52:19.937022   47309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-934450"
	I0626 20:52:19.937178   47309 addons.go:66] Setting metrics-server=true in profile "no-preload-934450"
	I0626 20:52:19.937198   47309 addons.go:228] Setting addon metrics-server=true in "no-preload-934450"
	W0626 20:52:19.937206   47309 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:19.937259   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.937461   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937485   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.937664   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937686   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.956754   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0626 20:52:19.956777   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0626 20:52:19.956923   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I0626 20:52:19.957245   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957327   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957473   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957897   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.957918   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958063   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958078   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958217   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958240   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958385   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959001   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.959029   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.959257   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959323   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959523   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.960115   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.960168   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.980739   47309 addons.go:228] Setting addon default-storageclass=true in "no-preload-934450"
	W0626 20:52:19.980887   47309 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:19.980924   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.981308   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.981348   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.982528   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0626 20:52:19.982768   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I0626 20:52:19.983398   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984115   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984291   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.984303   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.984767   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985276   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.985294   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.985346   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.985720   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985919   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.987605   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.989810   47309 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:19.991208   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:19.991229   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:19.991248   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:19.989487   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.997528   47309 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:19.996110   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:19.996135   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999411   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:19.999436   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999495   47309 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:19.999511   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:19.999532   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.002886   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.003159   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.003321   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.004492   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.004806   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
	I0626 20:52:20.004991   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.005018   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.005189   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.005234   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.005402   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.005568   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.005703   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.005881   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.005899   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.006233   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.006590   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:20.006614   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:20.022796   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0626 20:52:20.023252   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.023827   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.023852   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.024209   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.024425   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:20.026279   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:20.026527   47309 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.026542   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:20.026559   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.029302   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029775   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.029804   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029944   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.030138   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.030321   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.030454   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.331846   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.341298   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:20.352664   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:20.352693   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:20.376961   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
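The pipeline above fetches the coredns ConfigMap, uses sed to splice a `hosts` stanza in front of the existing `forward . /etc/resolv.conf` plugin (and a `log` directive ahead of `errors`), then replaces the ConfigMap. Reconstructed from those sed expressions, the edited Corefile fragment would look roughly like this (surrounding directives elided):

        log
        errors
        ...
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

This is what later lets pods in the cluster resolve host.minikube.internal to the host-side address 192.168.50.1.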
	I0626 20:52:20.420573   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:20.420599   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:20.495388   47309 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-934450" context rescaled to 1 replicas
	I0626 20:52:20.495436   47309 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:20.497711   47309 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:20.499512   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:20.560528   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:20.560559   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:20.647734   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:21.308936   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.802312904s)
	I0626 20:52:21.309013   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:21.323340   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:21.333741   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:21.346686   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:21.346741   47779 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:21.427299   47779 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:21.427431   47779 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:21.598474   47779 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:21.598609   47779 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:21.598727   47779 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:52:21.802443   47779 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:17.989506   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:20.002885   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:21.804179   47779 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:21.804277   47779 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:21.804985   47779 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:21.805576   47779 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:21.806465   47779 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:21.807206   47779 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:21.807988   47779 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:21.808775   47779 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:21.809427   47779 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:21.810136   47779 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:21.810809   47779 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:21.811489   47779 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:21.811563   47779 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:22.127084   47779 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:22.371731   47779 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:22.635165   47779 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:22.843347   47779 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:22.866673   47779 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:22.868080   47779 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:22.868259   47779 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:23.015798   47779 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:22.468922   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.137025983s)
	I0626 20:52:22.468974   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.468988   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469285   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469339   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469359   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469390   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469315   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:22.469630   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469649   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469669   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469678   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469900   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469915   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597030   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.255690675s)
	I0626 20:52:23.597078   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.220078989s)
	I0626 20:52:23.597104   47309 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:23.597084   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597131   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597130   47309 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.097584802s)
	I0626 20:52:23.597162   47309 node_ready.go:35] waiting up to 6m0s for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.597463   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597463   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597489   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597499   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597508   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597879   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597931   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597950   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.632416   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.984627683s)
	I0626 20:52:23.632472   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632485   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.632907   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.632919   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.632940   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.632967   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632982   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.633279   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.633297   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.633307   47309 addons.go:464] Verifying addon metrics-server=true in "no-preload-934450"
	I0626 20:52:23.633353   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.635198   47309 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:19.674407   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:19.702224   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
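The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log; for orientation only, a bridge CNI conflist of this shape typically looks something like the following (all field values here are illustrative, not the file's actual contents):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }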
	I0626 20:52:19.744577   47605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=embed-certs-299839 minikube.k8s.io/updated_at=2023_06_26T20_52_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.783628   47605 ops.go:34] apiserver oom_adj: -16
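The `cat /proc/$(pgrep kube-apiserver)/oom_adj` check above reads the apiserver's OOM-killer adjustment; -16, near the floor of the legacy -17..15 range, makes the kernel's OOM killer very unlikely to select the apiserver under memory pressure. A small, self-contained sketch of the same check (pattern only, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the kube-apiserver PID (pgrep may print several; take the first).
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.Fields(string(out))[0]
    	// Read the legacy OOM adjustment for that process.
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }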
	I0626 20:52:20.149671   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:20.782659   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.283295   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.782574   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.283137   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.782766   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.282641   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.783459   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.017432   47779 out.go:204]   - Booting up control plane ...
	I0626 20:52:23.017573   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:23.019187   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:23.020097   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:23.023559   47779 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:23.025808   47779 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:23.636740   47309 addons.go:499] enable addons completed in 3.700468963s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0626 20:52:23.637657   47309 node_ready.go:49] node "no-preload-934450" has status "Ready":"True"
	I0626 20:52:23.637673   47309 node_ready.go:38] duration metric: took 40.495678ms waiting for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.637684   47309 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:23.676466   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
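The pod_ready lines that follow poll a specific pod until its PodReady condition turns True (each miss is logged as status "Ready":"False"). A minimal client-go sketch of that check, assuming the kubeconfig path from this log and using the coredns pod above as an example target (not minikube's pod_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(
    			context.Background(), "coredns-5d78c9869d-k8r6j", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }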
	I0626 20:52:25.699614   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:22.489080   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:24.490209   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:24.282506   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:24.782560   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.282565   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.783022   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.282856   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.783243   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.282657   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.783258   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.282802   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.783019   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.283285   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.782968   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.282489   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.782763   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.283126   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.445729   47605 kubeadm.go:1081] duration metric: took 11.701128618s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:31.445766   47605 kubeadm.go:406] StartCluster complete in 5m31.748710798s
	I0626 20:52:31.445787   47605 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.445873   47605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:31.448427   47605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.448700   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:31.448792   47605 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:31.448866   47605 addons.go:66] Setting storage-provisioner=true in profile "embed-certs-299839"
	I0626 20:52:31.448871   47605 addons.go:66] Setting default-storageclass=true in profile "embed-certs-299839"
	I0626 20:52:31.448884   47605 addons.go:228] Setting addon storage-provisioner=true in "embed-certs-299839"
	I0626 20:52:31.448885   47605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-299839"
	W0626 20:52:31.448892   47605 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:31.448938   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:31.448948   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.448986   47605 addons.go:66] Setting metrics-server=true in profile "embed-certs-299839"
	I0626 20:52:31.449006   47605 addons.go:228] Setting addon metrics-server=true in "embed-certs-299839"
	W0626 20:52:31.449013   47605 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:31.449053   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449762   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450455   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450635   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.450708   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.468787   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0626 20:52:31.469015   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0626 20:52:31.469401   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469497   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469929   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.469947   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470036   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.470073   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470548   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470605   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470723   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39029
	I0626 20:52:31.470915   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.471202   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.471236   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.471374   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.471846   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.471871   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.481862   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.482471   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.482499   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.492391   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0626 20:52:31.493190   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.493807   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.493833   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.494190   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.494347   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.496376   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.499801   47605 addons.go:228] Setting addon default-storageclass=true in "embed-certs-299839"
	W0626 20:52:31.499822   47605 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:31.499851   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.500224   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.500253   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.506027   47605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:31.507267   47605 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.507286   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:31.507306   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.507954   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0626 20:52:31.508919   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.509350   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.509364   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.509784   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.510070   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.511452   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.513168   47605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:28.196489   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:30.196782   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:26.989644   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:29.488966   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.506536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.511805   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.512430   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.514510   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.514522   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:31.514530   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.514536   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:31.514555   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.514709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.514860   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.515029   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.517249   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517628   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.517653   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517774   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.517948   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.518282   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.518454   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.522114   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0626 20:52:31.522433   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.522982   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.523010   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.523416   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.523984   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.524019   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.545037   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0626 20:52:31.545523   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.546109   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.546140   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.546551   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.546826   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.549289   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.549597   47605 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.549618   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:31.549638   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.553457   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553713   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.553744   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553798   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.553995   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.554131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.554284   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.693230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:31.713818   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.718654   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:31.718682   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:31.734681   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.767394   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:31.767424   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:31.884424   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:31.884443   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:31.961893   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:32.055887   47605 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-299839" context rescaled to 1 replicas
	I0626 20:52:32.055933   47605 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:32.058697   47605 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:32.530480   47779 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.504525 seconds
	I0626 20:52:32.530633   47779 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:32.556112   47779 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:33.096104   47779 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:33.096372   47779 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-473235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:33.615425   47779 kubeadm.go:322] [bootstrap-token] Using token: fvy9dh.hbeabw0ufqdnf2rd
	I0626 20:52:33.617480   47779 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:33.617622   47779 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:33.630158   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:33.641973   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:33.649480   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:33.657736   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:33.663093   47779 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:33.698108   47779 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:34.017843   47779 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:34.069498   47779 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:34.070500   47779 kubeadm.go:322] 
	I0626 20:52:34.070587   47779 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:34.070600   47779 kubeadm.go:322] 
	I0626 20:52:34.070691   47779 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:34.070705   47779 kubeadm.go:322] 
	I0626 20:52:34.070734   47779 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:34.070809   47779 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:34.070915   47779 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:34.070952   47779 kubeadm.go:322] 
	I0626 20:52:34.071047   47779 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:34.071060   47779 kubeadm.go:322] 
	I0626 20:52:34.071114   47779 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:34.071124   47779 kubeadm.go:322] 
	I0626 20:52:34.071183   47779 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:34.071276   47779 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:34.071360   47779 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:34.071369   47779 kubeadm.go:322] 
	I0626 20:52:34.071454   47779 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:34.071550   47779 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:34.071558   47779 kubeadm.go:322] 
	I0626 20:52:34.071677   47779 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.071823   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:34.071852   47779 kubeadm.go:322] 	--control-plane 
	I0626 20:52:34.071860   47779 kubeadm.go:322] 
	I0626 20:52:34.071961   47779 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:34.071973   47779 kubeadm.go:322] 
	I0626 20:52:34.072075   47779 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.072202   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:34.072734   47779 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:34.072775   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:52:34.072794   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:34.074659   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:32.060653   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:33.969636   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.276366101s)
	I0626 20:52:33.969679   47605 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:34.114443   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.400580422s)
	I0626 20:52:34.114587   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114636   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114483   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.379765696s)
	I0626 20:52:34.114695   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114993   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115036   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115049   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.115059   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.115068   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.115386   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115394   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115458   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117682   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.117720   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.117736   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117754   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.117764   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.119184   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.119204   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.119218   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.119238   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.119253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.120750   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.120787   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.120800   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.800635   47605 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.739945617s)
	I0626 20:52:34.800672   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838732117s)
	I0626 20:52:34.800721   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.800740   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.800674   47605 node_ready.go:35] waiting up to 6m0s for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.801059   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.801086   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.801103   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.801112   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.802733   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.802767   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.802781   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.802798   47605 addons.go:464] Verifying addon metrics-server=true in "embed-certs-299839"
	I0626 20:52:34.804616   47605 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:52:34.076233   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:34.097578   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:52:34.126294   47779 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:34.126351   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.126361   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=default-k8s-diff-port-473235 minikube.k8s.io/updated_at=2023_06_26T20_52_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.672738   47779 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:34.672886   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:32.196979   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.198202   47309 pod_ready.go:97] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198243   47309 pod_ready.go:81] duration metric: took 10.521748073s waiting for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:34.198256   47309 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198265   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208718   47309 pod_ready.go:92] pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.208751   47309 pod_ready.go:81] duration metric: took 10.474456ms waiting for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208765   47309 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216757   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.216787   47309 pod_ready.go:81] duration metric: took 8.014039ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216800   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226840   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.226862   47309 pod_ready.go:81] duration metric: took 10.054474ms waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226875   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234229   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.234252   47309 pod_ready.go:81] duration metric: took 7.369366ms waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234265   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603958   47309 pod_ready.go:92] pod "kube-proxy-jhz99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.603985   47309 pod_ready.go:81] duration metric: took 369.712585ms waiting for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603999   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.992990   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.993018   47309 pod_ready.go:81] duration metric: took 389.011206ms waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.993033   47309 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
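
The pod_ready waits above check each system-critical pod's Ready condition in sequence, then keep polling the still-pending metrics-server pod. A rough kubectl equivalent of one such wait, assuming the no-preload-934450 context from this run (an illustration, not the harness's actual mechanism):

	kubectl --context no-preload-934450 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
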
	I0626 20:52:33.991358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:36.489561   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.806005   47605 addons.go:499] enable addons completed in 3.357208024s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:52:34.826098   47605 node_ready.go:49] node "embed-certs-299839" has status "Ready":"True"
	I0626 20:52:34.826123   47605 node_ready.go:38] duration metric: took 25.328707ms waiting for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.826131   47605 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:34.853293   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388894   47605 pod_ready.go:92] pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.388921   47605 pod_ready.go:81] duration metric: took 1.535604079s waiting for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388931   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397936   47605 pod_ready.go:92] pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.397962   47605 pod_ready.go:81] duration metric: took 9.024703ms waiting for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397978   47605 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409066   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.409098   47605 pod_ready.go:81] duration metric: took 11.112746ms waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409111   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419292   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.419313   47605 pod_ready.go:81] duration metric: took 10.193966ms waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419322   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429116   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.429140   47605 pod_ready.go:81] duration metric: took 9.812044ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429154   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316268   47605 pod_ready.go:92] pod "kube-proxy-scfwr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.316318   47605 pod_ready.go:81] duration metric: took 887.155494ms waiting for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316334   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605351   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.605394   47605 pod_ready.go:81] duration metric: took 289.052198ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605409   47605 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:35.287764   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:35.787902   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.287089   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.786922   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.287932   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.787255   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.287820   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.786891   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.287467   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.787282   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.400022   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:39.401566   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:41.404969   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:38.491696   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.990293   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.013927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:42.518436   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.287734   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:40.786949   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.287187   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.787722   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.287098   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.787623   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.287242   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.787224   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.287339   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.787760   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.287273   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.787052   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.287810   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.436665   47779 kubeadm.go:1081] duration metric: took 12.310369141s to wait for elevateKubeSystemPrivileges.
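
The run of identical "get sa default" invocations above is a poll loop: elevateKubeSystemPrivileges retries until the default ServiceAccount exists before binding cluster-admin to it. A minimal sketch of that loop; the ~500ms interval matches the timestamp spacing in the log, but the harness's actual backoff may differ:

	until sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5  # assumed interval
	done
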
	I0626 20:52:46.436696   47779 kubeadm.go:406] StartCluster complete in 5m23.972219662s
	I0626 20:52:46.436715   47779 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.436798   47779 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:46.438623   47779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.438897   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:46.439016   47779 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:46.439110   47779 addons.go:66] Setting storage-provisioner=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439117   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:46.439128   47779 addons.go:66] Setting default-storageclass=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439166   47779 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-473235"
	I0626 20:52:46.439128   47779 addons.go:228] Setting addon storage-provisioner=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439240   47779 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:46.439285   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439133   47779 addons.go:66] Setting metrics-server=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439336   47779 addons.go:228] Setting addon metrics-server=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439346   47779 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:46.439383   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439663   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439691   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439694   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439717   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439733   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439754   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.456038   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0626 20:52:46.456227   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0626 20:52:46.456533   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.456769   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.457072   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457092   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457258   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457280   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457413   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457749   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457902   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.459751   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0626 20:52:46.460296   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.460326   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.460688   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.462951   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.462975   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.463384   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.463981   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.464006   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.477368   47779 addons.go:228] Setting addon default-storageclass=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.477472   47779 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:46.477516   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.477987   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.478063   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.479865   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0626 20:52:46.480358   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.480932   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.480951   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.481335   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.482608   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0626 20:52:46.482630   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.482982   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.483505   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.483521   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.483907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.484123   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.485234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.487634   47779 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:46.486430   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.488916   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:46.488938   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:46.488959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.490698   47779 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:43.900514   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.900540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:43.488701   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.992735   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:46.491860   47779 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.491875   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:46.491893   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.492950   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.493834   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.493855   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.494361   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.494827   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.494987   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.495130   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.496109   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.496170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496192   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.496213   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496294   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.496444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.496549   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.502119   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
	I0626 20:52:46.502456   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.502898   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.502916   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.503203   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.503723   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.503747   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.522597   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0626 20:52:46.523240   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.523892   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.523912   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.524423   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.524674   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.526567   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.528682   47779 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.528699   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:46.528721   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.531983   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532450   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.532477   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532785   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.533992   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.534158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.534302   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.698636   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:46.819666   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.915074   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.918133   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:46.918161   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:47.006856   47779 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-473235" context rescaled to 1 replicas
	I0626 20:52:47.006907   47779 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:47.008746   47779 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:45.013051   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.014722   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.010273   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:47.015003   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:47.015022   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:47.099554   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:47.099583   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:47.162192   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:48.848078   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.149396252s)
	I0626 20:52:48.848110   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.028412306s)
	I0626 20:52:48.848145   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848157   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848112   47779 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:48.848418   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848438   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848440   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848448   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848460   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848678   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848699   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848712   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848715   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848722   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848936   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848959   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.142482   47779 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.13217662s)
	I0626 20:52:49.142522   47779 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.142664   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.227563186s)
	I0626 20:52:49.142706   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.142723   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143018   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143037   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143047   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.143055   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.143309   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143402   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143378   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.230635   47779 node_ready.go:49] node "default-k8s-diff-port-473235" has status "Ready":"True"
	I0626 20:52:49.230663   47779 node_ready.go:38] duration metric: took 88.12938ms waiting for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.230688   47779 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:49.248094   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:49.857182   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694948259s)
	I0626 20:52:49.857243   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857254   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857552   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857569   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857579   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857588   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857816   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857836   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857847   47779 addons.go:464] Verifying addon metrics-server=true in "default-k8s-diff-port-473235"
	I0626 20:52:49.859648   47779 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:49.860902   47779 addons.go:499] enable addons completed in 3.421885216s: enabled=[default-storageclass storage-provisioner metrics-server]
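
Verifying the metrics-server addon amounts to confirming its APIService registration and deployment came up. A hedged manual equivalent of that check (standard metrics-server object names assumed, matching the manifests applied above):

	kubectl --context default-k8s-diff-port-473235 get apiservice v1beta1.metrics.k8s.io
	kubectl --context default-k8s-diff-port-473235 -n kube-system get deployment metrics-server
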
	I0626 20:52:47.901422   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.402347   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:48.490248   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.991228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.082154   46683 pod_ready.go:81] duration metric: took 4m0.000473504s waiting for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:51.082180   46683 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:52:51.082198   46683 pod_ready.go:38] duration metric: took 4m1.199581008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:51.082227   46683 kubeadm.go:640] restartCluster took 5m4.421255564s
	W0626 20:52:51.082286   46683 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:52:51.082319   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:52:50.897742   47779 pod_ready.go:92] pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.897765   47779 pod_ready.go:81] duration metric: took 1.649649958s waiting for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.897777   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.924988   47779 pod_ready.go:92] pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.925007   47779 pod_ready.go:81] duration metric: took 27.222965ms waiting for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.925016   47779 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942760   47779 pod_ready.go:92] pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.942781   47779 pod_ready.go:81] duration metric: took 17.75819ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942790   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956204   47779 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.956224   47779 pod_ready.go:81] duration metric: took 13.428405ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956235   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964542   47779 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.964569   47779 pod_ready.go:81] duration metric: took 8.32705ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964581   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791355   47779 pod_ready.go:92] pod "kube-proxy-k4hzc" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:51.791376   47779 pod_ready.go:81] duration metric: took 826.787812ms waiting for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791384   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078670   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:52.078700   47779 pod_ready.go:81] duration metric: took 287.306474ms waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078714   47779 pod_ready.go:38] duration metric: took 2.848014299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:52.078733   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:52:52.078789   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:52:52.094414   47779 api_server.go:72] duration metric: took 5.08747775s to wait for apiserver process to appear ...
	I0626 20:52:52.094444   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:52:52.094468   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:52:52.101300   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:52:52.102682   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:52:52.102703   47779 api_server.go:131] duration metric: took 8.250707ms to wait for apiserver health ...
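
The healthz probe above is a plain HTTPS GET against the apiserver on the non-default port 8444. A hedged reproduction from the host (-k skips TLS verification since minikube's CA is not in the system trust store; some clusters restrict anonymous access to /healthz):

	curl -sk https://192.168.61.238:8444/healthz
	# expected body: ok
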
	I0626 20:52:52.102712   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:52:52.283428   47779 system_pods.go:59] 9 kube-system pods found
	I0626 20:52:52.283459   47779 system_pods.go:61] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.283467   47779 system_pods.go:61] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.283474   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.283482   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.283488   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.283493   47779 system_pods.go:61] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.283500   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.283511   47779 system_pods.go:61] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.283519   47779 system_pods.go:61] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.283527   47779 system_pods.go:74] duration metric: took 180.810034ms to wait for pod list to return data ...
	I0626 20:52:52.283540   47779 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:52:52.478374   47779 default_sa.go:45] found service account: "default"
	I0626 20:52:52.478400   47779 default_sa.go:55] duration metric: took 194.853163ms for default service account to be created ...
	I0626 20:52:52.478418   47779 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:52:52.683697   47779 system_pods.go:86] 9 kube-system pods found
	I0626 20:52:52.683724   47779 system_pods.go:89] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.683730   47779 system_pods.go:89] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.683735   47779 system_pods.go:89] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.683740   47779 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.683745   47779 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.683748   47779 system_pods.go:89] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.683752   47779 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.683761   47779 system_pods.go:89] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.683773   47779 system_pods.go:89] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.683789   47779 system_pods.go:126] duration metric: took 205.364587ms to wait for k8s-apps to be running ...
	I0626 20:52:52.683798   47779 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:52:52.683846   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:52.698439   47779 system_svc.go:56] duration metric: took 14.634482ms WaitForService to wait for kubelet.
	I0626 20:52:52.698463   47779 kubeadm.go:581] duration metric: took 5.691531199s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:52:52.698480   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:52:52.879414   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:52:52.879441   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:52:52.879454   47779 node_conditions.go:105] duration metric: took 180.969761ms to run NodePressure ...
	I0626 20:52:52.879466   47779 start.go:228] waiting for startup goroutines ...
	I0626 20:52:52.879473   47779 start.go:233] waiting for cluster config update ...
	I0626 20:52:52.879484   47779 start.go:242] writing updated cluster config ...
	I0626 20:52:52.879748   47779 ssh_runner.go:195] Run: rm -f paused
	I0626 20:52:52.928982   47779 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:52:52.930701   47779 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-473235" cluster and "default" namespace by default
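
After the "Done!" line, the kubeconfig updated earlier has its current context pointed at the new cluster, so plain kubectl works against it. A quick smoke test (standard kubectl usage, nothing minikube-specific):

	kubectl config current-context   # default-k8s-diff-port-473235
	kubectl get pods -n kube-system
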
	I0626 20:52:49.513843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.515851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:54.013443   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:52.901965   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:55.400541   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:56.014186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:58.516445   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:57.900857   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:59.901944   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:01.013089   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:03.015510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:02.400534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:04.400691   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:06.401897   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:05.513529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.013510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.901751   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:11.400891   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:10.513562   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:12.515529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:13.900503   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:15.900570   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:14.208647   46683 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (23.126299276s)
	I0626 20:53:14.208727   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:14.222919   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:53:14.234762   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:53:14.244800   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
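The four "ls: cannot access" lines above are expected: the kubeadm reset that just completed removed those kubeconfig files, so minikube skips stale-config cleanup. A minimal Go sketch of the same presence check (paths taken from the log; the helper name is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    )

    // staleConfigsPresent reports whether the kubeconfig files written by a
    // previous kubeadm run still exist; if any is missing there is nothing
    // stale to clean up, matching the "skipping stale config cleanup" above.
    func staleConfigsPresent() bool {
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if _, err := os.Stat(f); err != nil {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	fmt.Println("stale configs present:", staleConfigsPresent())
    }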
	I0626 20:53:14.244840   46683 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0626 20:53:14.465786   46683 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:53:15.014781   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.017400   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.901367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:20.401697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:19.515459   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.015763   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.900407   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:24.901270   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.255771   46683 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0626 20:53:27.255867   46683 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:53:27.255968   46683 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:53:27.256115   46683 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:53:27.256237   46683 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:53:27.256368   46683 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:53:27.256489   46683 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:53:27.256550   46683 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0626 20:53:27.256604   46683 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:53:27.258050   46683 out.go:204]   - Generating certificates and keys ...
	I0626 20:53:27.258140   46683 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:53:27.258235   46683 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:53:27.258357   46683 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:53:27.258441   46683 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:53:27.258554   46683 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:53:27.258611   46683 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:53:27.258665   46683 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:53:27.258737   46683 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:53:27.258832   46683 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:53:27.258907   46683 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:53:27.258954   46683 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:53:27.259034   46683 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:53:27.259106   46683 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:53:27.259170   46683 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:53:27.259247   46683 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:53:27.259325   46683 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:53:27.259410   46683 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:53:27.260969   46683 out.go:204]   - Booting up control plane ...
	I0626 20:53:27.261074   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:53:27.261181   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:53:27.261257   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:53:27.261341   46683 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:53:27.261496   46683 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:53:27.261599   46683 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003012 seconds
	I0626 20:53:27.261709   46683 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:53:27.261854   46683 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:53:27.261940   46683 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:53:27.262112   46683 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-490377 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0626 20:53:27.262210   46683 kubeadm.go:322] [bootstrap-token] Using token: 9pdj92.0ssfpvr0ns0ww3t3
	I0626 20:53:27.263670   46683 out.go:204]   - Configuring RBAC rules ...
	I0626 20:53:27.263769   46683 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:53:27.263903   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:53:27.264029   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0626 20:53:27.264172   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:53:27.264278   46683 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:53:27.264333   46683 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:53:27.264372   46683 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:53:27.264379   46683 kubeadm.go:322] 
	I0626 20:53:27.264445   46683 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:53:27.264454   46683 kubeadm.go:322] 
	I0626 20:53:27.264557   46683 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:53:27.264568   46683 kubeadm.go:322] 
	I0626 20:53:27.264598   46683 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:53:27.264668   46683 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:53:27.264715   46683 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:53:27.264721   46683 kubeadm.go:322] 
	I0626 20:53:27.264769   46683 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:53:27.264846   46683 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:53:27.264934   46683 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:53:27.264943   46683 kubeadm.go:322] 
	I0626 20:53:27.265038   46683 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0626 20:53:27.265101   46683 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:53:27.265107   46683 kubeadm.go:322] 
	I0626 20:53:27.265171   46683 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265269   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:53:27.265292   46683 kubeadm.go:322]     --control-plane 	  
	I0626 20:53:27.265298   46683 kubeadm.go:322] 
	I0626 20:53:27.265439   46683 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:53:27.265451   46683 kubeadm.go:322] 
	I0626 20:53:27.265581   46683 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265739   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
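The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A hedged Go sketch of that computation (the ca.crt filename under the certificateDir logged earlier is an assumption):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// certificateDir is /var/lib/minikube/certs per the [certs] lines above;
    	// "ca.crt" as the CA filename is assumed for illustration.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Hash the DER-encoded SubjectPublicKeyInfo of the CA's public key.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }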
	I0626 20:53:27.265753   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:53:27.265765   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:53:27.267293   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:53:24.515093   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.014403   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.401630   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:29.404203   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.268439   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:53:27.281135   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:53:27.304145   46683 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:53:27.304275   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=old-k8s-version-490377 minikube.k8s.io/updated_at=2023_06_26T20_53_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.304277   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.555789   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.571040   46683 ops.go:34] apiserver oom_adj: -16
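The "apiserver oom_adj: -16" above is read straight from procfs via the pgrep pipeline a few lines earlier; -16 tells the kernel's OOM killer to strongly prefer other victims over the apiserver. A minimal sketch of the same read, with the pid hardcoded purely for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// The log derives this pid from `pgrep kube-apiserver`; 1234 is a stand-in.
    	data, err := os.ReadFile("/proc/1234/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }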
	I0626 20:53:28.180843   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:28.681089   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.180441   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.680355   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.180860   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.680971   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.181088   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.680352   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.516069   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.013135   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.013391   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:31.901777   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.400314   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:36.400967   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.180338   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:32.680389   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.180568   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.681010   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.180575   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.680905   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.180640   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.680412   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.181081   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.680836   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.514263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:39.013193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:38.900309   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:40.900622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:37.181178   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:37.680710   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.180280   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.680304   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.681177   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.180431   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.681031   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.180847   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.681058   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.680883   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.800538   46683 kubeadm.go:1081] duration metric: took 15.496322508s to wait for elevateKubeSystemPrivileges.
	I0626 20:53:42.800568   46683 kubeadm.go:406] StartCluster complete in 5m56.189450192s
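The run of "kubectl get sa default" commands above, spaced roughly 500ms apart, is minikube polling until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration metric measures. A minimal sketch of that loop, assuming client-go (the function name is hypothetical):

    import (
    	"context"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls the API every 500ms, matching the cadence in the
    // log, until the "default" ServiceAccount in the "default" namespace exists.
    func waitForDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
    	for {
    		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    		if err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }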
	I0626 20:53:42.800584   46683 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.800661   46683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:53:42.802530   46683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.802755   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:53:42.802810   46683 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:53:42.802908   46683 addons.go:66] Setting storage-provisioner=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802926   46683 addons.go:228] Setting addon storage-provisioner=true in "old-k8s-version-490377"
	W0626 20:53:42.802936   46683 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:53:42.802934   46683 addons.go:66] Setting default-storageclass=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802953   46683 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-490377"
	I0626 20:53:42.802972   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:53:42.802983   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.802974   46683 addons.go:66] Setting metrics-server=true in profile "old-k8s-version-490377"
	I0626 20:53:42.803034   46683 addons.go:228] Setting addon metrics-server=true in "old-k8s-version-490377"
	W0626 20:53:42.803048   46683 addons.go:237] addon metrics-server should already be in state true
	I0626 20:53:42.803155   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.803353   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803394   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803437   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803468   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803563   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803607   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.822676   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0626 20:53:42.822891   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0626 20:53:42.823127   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823221   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0626 20:53:42.823284   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823599   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823763   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823771   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823783   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.823790   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824056   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.824082   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824096   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824141   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824310   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.824408   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824656   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824682   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.824924   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824954   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.839635   46683 addons.go:228] Setting addon default-storageclass=true in "old-k8s-version-490377"
	W0626 20:53:42.839655   46683 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:53:42.839675   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.840131   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.840171   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.846479   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0626 20:53:42.847180   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.847711   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.847728   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.848194   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.848454   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.848519   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I0626 20:53:42.850321   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.850427   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.852331   46683 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:53:42.851252   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.853522   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.853581   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:53:42.853603   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:53:42.853625   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.854082   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.854292   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.856641   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.858158   46683 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:53:42.857809   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.859467   46683 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:42.859485   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:53:42.859500   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.859505   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.859528   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.858223   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.858466   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0626 20:53:42.860179   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.860331   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.860421   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.860783   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.860909   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.860923   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.861642   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.862199   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.862246   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.863700   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864103   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.864124   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864413   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.864598   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.864737   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.864867   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.878470   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0626 20:53:42.878961   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.879500   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.879510   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.879860   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.880063   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.881757   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.882028   46683 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:42.882040   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:53:42.882054   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.887689   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.887749   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.887779   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887888   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.888058   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.888203   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.981495   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
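The sed pipeline above patches the CoreDNS Corefile in place; applying its two insertions, the edited stanza should read roughly as follows (other plugins omitted), which is the host record injection confirmed a few lines below:

            log
            errors
            hosts {
               192.168.72.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf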
	I0626 20:53:43.064530   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:53:43.064554   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:53:43.074105   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:43.091876   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:43.132074   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:53:43.132095   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:53:43.219103   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:53:43.219133   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:53:43.285081   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:53:43.443796   46683 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-490377" context rescaled to 1 replicas
	I0626 20:53:43.443841   46683 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:53:43.445639   46683 out.go:177] * Verifying Kubernetes components...
	I0626 20:53:41.014279   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.515278   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.447458   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:43.642242   46683 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0626 20:53:44.194901   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.102988033s)
	I0626 20:53:44.194990   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195008   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.194932   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120793889s)
	I0626 20:53:44.195085   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195096   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195452   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195466   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195475   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195486   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195493   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195518   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195529   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195714   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195765   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195774   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195816   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195893   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195905   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195922   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195936   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.196171   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.196190   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.196197   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.260680   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.260703   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.260706   46683 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.261103   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261122   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261134   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.261144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.261146   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.261364   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261386   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261396   46683 addons.go:464] Verifying addon metrics-server=true in "old-k8s-version-490377"
	I0626 20:53:44.262936   46683 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:53:42.901604   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.902182   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.264049   46683 addons.go:499] enable addons completed in 1.461244367s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:53:44.318103   46683 node_ready.go:49] node "old-k8s-version-490377" has status "Ready":"True"
	I0626 20:53:44.318135   46683 node_ready.go:38] duration metric: took 57.40895ms waiting for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.318147   46683 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:53:44.333409   46683 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:46.345926   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:46.015128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.516066   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:47.400802   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:49.901066   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.347529   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:50.847639   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:51.012404   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.012697   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:52.400326   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:54.400932   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.402434   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.345907   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:55.345824   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.345850   46683 pod_ready.go:81] duration metric: took 11.012408828s waiting for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.345858   46683 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350198   46683 pod_ready.go:92] pod "kube-proxy-m7hz7" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.350214   46683 pod_ready.go:81] duration metric: took 4.351274ms waiting for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350222   46683 pod_ready.go:38] duration metric: took 11.032065043s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
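Each pod_ready.go status line in this log reduces to one condition check on the pod object. A minimal sketch, assuming client-go's core/v1 types, of what "Ready":"True" means:

    import corev1 "k8s.io/api/core/v1"

    // isPodReady mirrors the check behind the pod_ready.go lines: a pod counts
    // as "Ready" only when its PodReady condition reports ConditionTrue.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }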
	I0626 20:53:55.350236   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:53:55.350285   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:53:55.366478   46683 api_server.go:72] duration metric: took 11.922600619s to wait for apiserver process to appear ...
	I0626 20:53:55.366501   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:53:55.366518   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:53:55.373257   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:53:55.374362   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:53:55.374382   46683 api_server.go:131] duration metric: took 7.874169ms to wait for apiserver health ...
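The healthz check above is a plain HTTPS GET against the apiserver that expects a 200 and the body "ok" (both visible in the log). A hedged sketch; a real client would present the minikube client certificates, and InsecureSkipVerify is used here only to keep the illustration short:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.72.111:8443/healthz") // endpoint from the log
    	if err != nil {
    		fmt.Println("healthz probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body)
    }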
	I0626 20:53:55.374390   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:53:55.377704   46683 system_pods.go:59] 4 kube-system pods found
	I0626 20:53:55.377719   46683 system_pods.go:61] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.377724   46683 system_pods.go:61] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.377744   46683 system_pods.go:61] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.377754   46683 system_pods.go:61] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.377759   46683 system_pods.go:74] duration metric: took 3.35753ms to wait for pod list to return data ...
	I0626 20:53:55.377765   46683 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:53:55.379628   46683 default_sa.go:45] found service account: "default"
	I0626 20:53:55.379641   46683 default_sa.go:55] duration metric: took 1.87263ms for default service account to be created ...
	I0626 20:53:55.379647   46683 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:53:55.382155   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.382171   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.382176   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.382183   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.382189   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.382204   46683 retry.go:31] will retry after 310.903974ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:55.698587   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.698613   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.698618   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.698625   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.698631   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.698646   46683 retry.go:31] will retry after 300.100433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.005356   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.005397   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.005408   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.005419   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.005427   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.005446   46683 retry.go:31] will retry after 407.352435ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.417879   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.417905   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.417910   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.417916   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.417922   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.417935   46683 retry.go:31] will retry after 483.508514ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
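The retry.go waits above, and the longer ones that follow further down, grow from roughly 300ms toward several seconds with visible jitter between kube-system pod listings. An illustrative sketch of that cadence; checkComponents is a hypothetical stand-in for the real pod listing:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // checkComponents would list kube-system pods and return the control-plane
    // components still missing (etcd, kube-apiserver, ...); stubbed out here.
    func checkComponents() []string { return nil }

    func main() {
    	delay := 300 * time.Millisecond
    	for {
    		missing := checkComponents()
    		if len(missing) == 0 {
    			return
    		}
    		// Jitter the wait, then grow the base delay toward the
    		// multi-second intervals seen later in the log.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
    		time.Sleep(wait)
    		delay = delay * 3 / 2
    	}
    }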
	I0626 20:53:55.013247   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:57.015631   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:58.900650   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.401491   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.906260   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.906282   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.906287   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.906293   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.906301   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.906319   46683 retry.go:31] will retry after 527.167542ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:57.438949   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:57.438985   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:57.438995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:57.439006   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:57.439019   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:57.439038   46683 retry.go:31] will retry after 902.255612ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:58.346131   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:58.346161   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:58.346166   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:58.346173   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:58.346179   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:58.346192   46683 retry.go:31] will retry after 904.271086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.256458   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:59.256489   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:59.256497   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:59.256509   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:59.256517   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:59.256534   46683 retry.go:31] will retry after 1.069634228s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:00.331828   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:00.331858   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:00.331865   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:00.331873   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:00.331879   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:00.331896   46683 retry.go:31] will retry after 1.418598639s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:01.755104   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:01.755131   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:01.755136   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:01.755143   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:01.755149   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:01.755162   46683 retry.go:31] will retry after 1.624135654s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.514847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.515086   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.900425   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:05.900854   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.385085   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:03.385111   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:03.385116   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:03.385122   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:03.385128   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:03.385142   46683 retry.go:31] will retry after 1.861818901s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:05.251844   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:05.251870   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:05.251875   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:05.251882   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:05.251888   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:05.251901   46683 retry.go:31] will retry after 3.23679019s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:06.013786   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.514493   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.399542   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:10.400928   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.494644   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:08.494669   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:08.494674   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:08.494681   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:08.494687   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:08.494700   46683 retry.go:31] will retry after 4.210335189s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:10.514707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.515079   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.415273   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:14.899807   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.709730   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:12.709754   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:12.709759   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:12.709765   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:12.709771   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:12.709785   46683 retry.go:31] will retry after 4.208864521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:15.012766   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:17.012807   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.014851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.901107   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.400540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:21.402204   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.923625   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:16.923654   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:16.923662   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:16.923673   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:16.923682   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:16.923701   46683 retry.go:31] will retry after 6.417296046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:21.514829   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.515117   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.402546   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:25.903195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.347074   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:23.347099   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:23.347105   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Pending
	I0626 20:54:23.347108   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:23.347115   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:23.347121   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:23.347133   46683 retry.go:31] will retry after 7.108155838s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:26.013263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.013708   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.399697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.401036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.460927   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:30.460950   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:30.460955   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:30.460995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:30.461004   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:30.461014   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:30.461027   46683 retry.go:31] will retry after 9.756193162s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:30.514139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.514334   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:34.901064   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:35.013362   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.013815   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.014126   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.400945   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:40.223985   46683 system_pods.go:86] 7 kube-system pods found
	I0626 20:54:40.224009   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:40.224014   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Pending
	I0626 20:54:40.224018   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Pending
	I0626 20:54:40.224022   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:40.224026   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:40.224032   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:40.224037   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:40.224052   46683 retry.go:31] will retry after 8.963386657s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:41.515388   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:44.015053   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:41.900424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:43.901263   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.400098   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.514128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.013743   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.195390   46683 system_pods.go:86] 8 kube-system pods found
	I0626 20:54:49.195416   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:49.195421   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Running
	I0626 20:54:49.195426   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Running
	I0626 20:54:49.195430   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:49.195434   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:49.195438   46683 system_pods.go:89] "kube-scheduler-old-k8s-version-490377" [c6fe04b8-d037-452b-bf41-3719c032b7ef] Running
	I0626 20:54:49.195444   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:49.195450   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:49.195458   46683 system_pods.go:126] duration metric: took 53.81580645s to wait for k8s-apps to be running ...
	I0626 20:54:49.195466   46683 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:54:49.195518   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:54:49.219014   46683 system_svc.go:56] duration metric: took 23.534309ms WaitForService to wait for kubelet.
	I0626 20:54:49.219049   46683 kubeadm.go:581] duration metric: took 1m5.775176119s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:54:49.219089   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:54:49.223397   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:54:49.223426   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:54:49.223438   46683 node_conditions.go:105] duration metric: took 4.339435ms to run NodePressure ...
	I0626 20:54:49.223452   46683 start.go:228] waiting for startup goroutines ...
	I0626 20:54:49.223461   46683 start.go:233] waiting for cluster config update ...
	I0626 20:54:49.223472   46683 start.go:242] writing updated cluster config ...
	I0626 20:54:49.223798   46683 ssh_runner.go:195] Run: rm -f paused
	I0626 20:54:49.277613   46683 start.go:652] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0626 20:54:49.279501   46683 out.go:177] 
	W0626 20:54:49.280841   46683 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0626 20:54:49.282249   46683 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0626 20:54:49.283695   46683 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-490377" cluster and "default" namespace by default
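
The retry.go lines above show the "old-k8s-version-490377" start-up waiting for system pods with a growing delay between attempts (1.4s, 1.6s, 1.9s, ... 9.8s) until no components are reported missing. Below is a minimal sketch of that poll-with-backoff loop; the function names, the backoff factor, and the jitter formula are illustrative assumptions, not minikube's actual retry.go implementation.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"strings"
    	"time"
    )

    // waitSystemPods polls check() until it reports no missing components or
    // the deadline passes, growing the delay between attempts roughly the way
    // the retry.go lines above do.
    func waitSystemPods(check func() []string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := time.Second
    	for {
    		missing := check()
    		if len(missing) == 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out; missing components: %s",
    				strings.Join(missing, ", "))
    		}
    		// jitter keeps concurrent waiters from polling in lockstep
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		delay = delay * 3 / 2
    	}
    }

    func main() {
    	attempts := 0
    	err := waitSystemPods(func() []string {
    		attempts++
    		if attempts < 4 { // pretend the control plane needs a few polls
    			return []string{"etcd", "kube-apiserver"}
    		}
    		return nil
    	}, time.Minute)
    	fmt.Println("done:", err)
    }
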
	I0626 20:54:48.401602   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:50.900375   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:51.514071   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.013330   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:52.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.900946   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.013501   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:58.014748   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.901531   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:59.401822   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:00.016725   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:02.514316   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:01.902698   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:04.400011   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:06.402149   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:05.014536   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:07.514975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:08.900297   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.900463   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.013780   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:12.514823   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:13.399907   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.400044   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.014032   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.515161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.907245   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.400962   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.015074   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.514465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.403366   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.900247   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.514993   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.012592   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.013612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.400192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.401917   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.402240   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.015647   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.513844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.900187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.902063   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.514657   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:37.514888   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:38.400753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.902398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.014755   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:42.514599   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:43.401280   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:45.902265   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:44.521736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.016422   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.902334   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:50.400765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:49.515570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.014736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.900293   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.900572   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.514047   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.013346   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.013409   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.400170   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.401528   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.013946   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:03.014845   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.902597   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:04.401919   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:05.514639   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:08.016797   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:06.901493   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:09.400229   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:11.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:10.513478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:12.514938   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:13.403138   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.901738   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.013852   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:17.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:18.400812   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.401025   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.013522   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.015651   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.016747   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.401212   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.401675   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.515343   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:28.515706   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.902301   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:29.401779   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.012844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:33.013826   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.901622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.403688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.993256   47309 pod_ready.go:81] duration metric: took 4m0.000204736s waiting for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:34.993309   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:34.993324   47309 pod_ready.go:38] duration metric: took 4m11.355630262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
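
The pod_ready.go lines above poll the metrics-server pod's Ready condition every couple of seconds until a 4-minute context deadline expires ("WaitExtra: waitPodCondition: context deadline exceeded"). A minimal client-go sketch of that check follows; it assumes a kubeconfig at the default location and hard-codes the pod name from the log, and is not minikube's actual pod_ready.go code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitPodReady polls until the pod is Ready or ctx expires, mirroring the
    // `has status "Ready":"False"` sequence above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // "context deadline exceeded"
    		case <-time.After(2 * time.Second):
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	fmt.Println(waitPodReady(ctx, cs, "kube-system", "metrics-server-74d5c6b9c-4dkpm"))
    }
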
	I0626 20:56:34.993352   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:34.993410   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:34.993484   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:35.038316   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.038342   47309 cri.go:89] found id: ""
	I0626 20:56:35.038352   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:35.038414   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.042851   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:35.042914   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:35.076892   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.076925   47309 cri.go:89] found id: ""
	I0626 20:56:35.076934   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:35.076990   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.081850   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:35.081933   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:35.119872   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.119896   47309 cri.go:89] found id: ""
	I0626 20:56:35.119904   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:35.119971   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.124661   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:35.124731   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:35.158899   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.158924   47309 cri.go:89] found id: ""
	I0626 20:56:35.158933   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:35.158991   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.163512   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:35.163587   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:35.195698   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.195721   47309 cri.go:89] found id: ""
	I0626 20:56:35.195729   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:35.195786   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.199883   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:35.199935   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:35.243909   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.243932   47309 cri.go:89] found id: ""
	I0626 20:56:35.243939   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:35.243992   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.248331   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:35.248388   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:35.287985   47309 cri.go:89] found id: ""
	I0626 20:56:35.288009   47309 logs.go:284] 0 containers: []
	W0626 20:56:35.288019   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:35.288026   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:35.288085   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:35.324050   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.324129   47309 cri.go:89] found id: ""
	I0626 20:56:35.324151   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:35.324219   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.328564   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:35.328588   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:35.369968   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:35.369997   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:35.391588   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:35.391615   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:35.542328   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:35.542356   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.579140   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:35.579172   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.635428   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:35.635463   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.674715   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:35.674750   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.732788   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:35.732837   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.774860   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:35.774901   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:35.881082   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:35.881118   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.929445   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:35.929478   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.968723   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:35.968754   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
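
Each cri.go block above runs the same discovery command per component, `sudo crictl ps -a --quiet --name=<component>`, and records the container IDs it prints (zero IDs produces the "No container was found matching kindnet" warning). A small sketch of that pattern, with the command string copied from the log; the helper name and the demo loop are illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers runs the command shown in the cri.go lines above and
    // returns the container IDs it prints, one per line.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a",
    		"--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, n := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
    		ids, err := listContainers(n)
    		if err != nil {
    			fmt.Println(n, "error:", err)
    			continue
    		}
    		// an empty slice corresponds to the "0 containers" case above
    		fmt.Printf("%s: %d containers: %v\n", n, len(ids), ids)
    	}
    }
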
	I0626 20:56:35.015798   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.514548   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.606375   47605 pod_ready.go:81] duration metric: took 4m0.000950536s waiting for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:37.606403   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:37.606412   47605 pod_ready.go:38] duration metric: took 4m2.78027212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:56:37.606429   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:37.606459   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:37.606521   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:37.668350   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:37.668383   47605 cri.go:89] found id: ""
	I0626 20:56:37.668391   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:37.668453   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.675583   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:37.675669   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:37.710826   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:37.710852   47605 cri.go:89] found id: ""
	I0626 20:56:37.710860   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:37.710916   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.715610   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:37.715671   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:37.751709   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:37.751784   47605 cri.go:89] found id: ""
	I0626 20:56:37.751812   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:37.751877   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.757177   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:37.757241   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:37.790384   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:37.790413   47605 cri.go:89] found id: ""
	I0626 20:56:37.790420   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:37.790468   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.795294   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:37.795352   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:37.832125   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:37.832157   47605 cri.go:89] found id: ""
	I0626 20:56:37.832168   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:37.832239   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.836762   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:37.836816   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:37.877789   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:37.877817   47605 cri.go:89] found id: ""
	I0626 20:56:37.877827   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:37.877887   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.885276   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:37.885348   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:37.929701   47605 cri.go:89] found id: ""
	I0626 20:56:37.929731   47605 logs.go:284] 0 containers: []
	W0626 20:56:37.929745   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:37.929755   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:37.929815   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:37.970177   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:37.970201   47605 cri.go:89] found id: ""
	I0626 20:56:37.970211   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:37.970270   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.975002   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:37.975025   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:38.022831   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:38.022862   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:38.058414   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:38.058446   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:38.168689   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:38.168726   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:38.183930   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:38.183959   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:38.224623   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:38.224653   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:38.271164   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:38.271205   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:38.308365   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:38.308391   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:38.363321   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:38.363356   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:38.510275   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:38.510310   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:38.552512   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:38.552544   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:38.586122   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:38.586155   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
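
The logs.go "Gathering logs for ..." sequence above fans out over fixed sources, capping each at the last 400 lines: journalctl for kubelet and CRI-O, dmesg, `kubectl describe nodes`, and `crictl logs --tail 400 <id>` per container found earlier. A rough sketch of that fan-out; the map layout is an assumption and the container ID is a placeholder, not one of the real IDs above.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather mirrors the logs.go fan-out above: one fixed command per source,
    // each capped at its last 400 lines.
    func gather(sources map[string][]string) {
    	for name, args := range sources {
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		fmt.Printf("=== %s (err=%v, %d bytes) ===\n", name, err, len(out))
    	}
    }

    func main() {
    	gather(map[string][]string{
    		"kubelet": {"journalctl", "-u", "kubelet", "-n", "400"},
    		"CRI-O":   {"journalctl", "-u", "crio", "-n", "400"},
    		// placeholder ID; real IDs come from `crictl ps -a --quiet`
    		"etcd": {"/usr/bin/crictl", "logs", "--tail", "400", "0123456789ab"},
    	})
    }
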
	I0626 20:56:38.945144   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:38.962999   47309 api_server.go:72] duration metric: took 4m18.467522928s to wait for apiserver process to appear ...
	I0626 20:56:38.963026   47309 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:56:38.963067   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:38.963129   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:39.002109   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.002133   47309 cri.go:89] found id: ""
	I0626 20:56:39.002141   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:39.002198   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.006799   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:39.006864   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:39.042531   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:39.042556   47309 cri.go:89] found id: ""
	I0626 20:56:39.042566   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:39.042621   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.047228   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:39.047301   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:39.080810   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.080842   47309 cri.go:89] found id: ""
	I0626 20:56:39.080850   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:39.080916   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.085173   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:39.085238   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:39.116857   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:39.116886   47309 cri.go:89] found id: ""
	I0626 20:56:39.116895   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:39.116946   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.121912   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:39.122007   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:39.166886   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.166912   47309 cri.go:89] found id: ""
	I0626 20:56:39.166920   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:39.166972   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.171344   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:39.171420   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:39.205333   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:39.205358   47309 cri.go:89] found id: ""
	I0626 20:56:39.205366   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:39.205445   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.211414   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:39.211491   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:39.249068   47309 cri.go:89] found id: ""
	I0626 20:56:39.249092   47309 logs.go:284] 0 containers: []
	W0626 20:56:39.249103   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:39.249110   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:39.249171   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:39.283295   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.283314   47309 cri.go:89] found id: ""
	I0626 20:56:39.283325   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:39.283372   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.287514   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:39.287537   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:39.420720   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:39.420752   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.479018   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:39.479052   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.512285   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:39.512313   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.549886   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:39.549922   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.590619   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:39.590647   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:40.076597   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:40.076642   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:40.092551   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:40.092581   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:40.135655   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:40.135699   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:40.184590   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:40.184628   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:40.238354   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:40.238393   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:40.283033   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:40.283075   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
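
After the process check (`sudo pgrep -xnf kube-apiserver.*minikube.*`), api_server.go:88 above moves on to "waiting for apiserver healthz status". A minimal sketch of such a healthz poll follows; the host:port and the skipped TLS verification are assumptions for illustration only (minikube verifies against the cluster CA from the kubeconfig, and the apiserver address varies per cluster).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// assumption: skip verification; real code trusts the cluster CA
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < 30; i++ {
    		// placeholder address; not an address taken from this report
    		resp, err := client.Get("https://192.168.39.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver never became healthy")
    }
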
	I0626 20:56:41.567686   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:41.584431   47605 api_server.go:72] duration metric: took 4m9.528462616s to wait for apiserver process to appear ...
	I0626 20:56:41.584462   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:56:41.584492   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:41.584553   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:41.622027   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:41.622051   47605 cri.go:89] found id: ""
	I0626 20:56:41.622061   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:41.622119   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.626209   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:41.626271   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:41.658658   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:41.658680   47605 cri.go:89] found id: ""
	I0626 20:56:41.658689   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:41.658746   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.666357   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:41.666437   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:41.702344   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:41.702369   47605 cri.go:89] found id: ""
	I0626 20:56:41.702378   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:41.702443   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.706706   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:41.706775   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:41.743534   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:41.743554   47605 cri.go:89] found id: ""
	I0626 20:56:41.743561   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:41.743619   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.748338   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:41.748408   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:41.780299   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:41.780324   47605 cri.go:89] found id: ""
	I0626 20:56:41.780333   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:41.780392   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.785308   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:41.785395   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:41.819335   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:41.819361   47605 cri.go:89] found id: ""
	I0626 20:56:41.819370   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:41.819415   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.823767   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:41.823832   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:41.855049   47605 cri.go:89] found id: ""
	I0626 20:56:41.855079   47605 logs.go:284] 0 containers: []
	W0626 20:56:41.855088   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:41.855094   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:41.855147   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:41.886378   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:41.886400   47605 cri.go:89] found id: ""
	I0626 20:56:41.886408   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:41.886459   47605 ssh_runner.go:195] Run: which crictl
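
The block above is minikube's container-discovery step: for each expected control-plane component it runs crictl on the node and records any matching container IDs (kindnet comes back empty on this crio cluster, which the log only warns about). The same enumeration can be reproduced by hand from `minikube ssh`; a minimal sketch using the exact commands quoted in the log:

    # Print the container ID(s), in any state, for each component name.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        echo "== $c =="
        sudo crictl ps -a --quiet --name="$c"
    done
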
	I0626 20:56:41.891748   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:41.891777   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:42.003933   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:42.003968   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:42.018182   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:42.018230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:42.145038   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:42.145074   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:42.181403   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:42.181438   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:42.224428   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:42.224467   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:42.260067   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:42.260097   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:42.312924   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:42.312972   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:42.347173   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:42.347203   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:42.920689   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:42.920725   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:42.970428   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:42.970456   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:43.021561   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.021587   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
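
Each ID collected above then feeds the "Gathering logs for ..." pass that follows it: kubelet and CRI-O logs come from the systemd journal, kernel warnings from dmesg, node state from kubectl describe nodes, and every discovered container is tailed individually. A condensed manual equivalent, assuming one of the container IDs from the log has been stored in $ID (placeholder, not from the log):

    sudo journalctl -u kubelet -n 400                                        # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel
    sudo /usr/bin/crictl logs --tail 400 "$ID"                               # one container
    sudo journalctl -u crio -n 400                                           # CRI-O
    sudo crictl ps -a                                                        # container status
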
	I0626 20:56:42.886551   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:56:42.892462   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:56:42.894253   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:42.894277   47309 api_server.go:131] duration metric: took 3.931242905s to wait for apiserver health ...
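
The health probe recorded here is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns the literal body "ok" once the control plane is serving. Roughly the same check by hand (a sketch; -k skips certificate verification, whereas minikube's client uses the cluster's CA certificate):

    curl -sk https://192.168.50.38:8443/healthz
    # ok
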
	I0626 20:56:42.894286   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:42.894309   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:42.894364   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:42.931699   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:42.931728   47309 cri.go:89] found id: ""
	I0626 20:56:42.931736   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:42.931792   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.936873   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:42.936944   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:42.968701   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:42.968720   47309 cri.go:89] found id: ""
	I0626 20:56:42.968727   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:42.968778   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.974309   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:42.974381   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:43.010388   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:43.010416   47309 cri.go:89] found id: ""
	I0626 20:56:43.010425   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:43.010482   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.015524   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:43.015582   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:43.049074   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.049103   47309 cri.go:89] found id: ""
	I0626 20:56:43.049112   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:43.049173   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.053750   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:43.053814   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:43.096699   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:43.096727   47309 cri.go:89] found id: ""
	I0626 20:56:43.096734   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:43.096776   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.101210   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:43.101264   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:43.133316   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:43.133344   47309 cri.go:89] found id: ""
	I0626 20:56:43.133354   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:43.133420   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.138226   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:43.138289   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:43.169863   47309 cri.go:89] found id: ""
	I0626 20:56:43.169896   47309 logs.go:284] 0 containers: []
	W0626 20:56:43.169903   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:43.169908   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:43.169962   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:43.201859   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.201884   47309 cri.go:89] found id: ""
	I0626 20:56:43.201892   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:43.201942   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.207043   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:43.207072   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.264723   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:43.264755   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.301988   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.302016   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:43.344103   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:43.344132   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:43.357414   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:43.357445   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:43.486425   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:43.486453   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:43.529205   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:43.529239   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:43.575311   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:43.575344   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:44.074749   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:44.074790   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:44.184946   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:44.184987   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:44.221993   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:44.222028   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:44.263095   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:44.263127   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:46.817987   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:46.818014   47309 system_pods.go:61] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.818019   47309 system_pods.go:61] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.818023   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.818027   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.818031   47309 system_pods.go:61] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.818035   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.818041   47309 system_pods.go:61] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.818047   47309 system_pods.go:61] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.818052   47309 system_pods.go:74] duration metric: took 3.923762125s to wait for pod list to return data ...
	I0626 20:56:46.818061   47309 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:46.821789   47309 default_sa.go:45] found service account: "default"
	I0626 20:56:46.821811   47309 default_sa.go:55] duration metric: took 3.746079ms for default service account to be created ...
	I0626 20:56:46.821818   47309 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:46.830080   47309 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:46.830117   47309 system_pods.go:89] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.830127   47309 system_pods.go:89] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.830134   47309 system_pods.go:89] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.830141   47309 system_pods.go:89] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.830147   47309 system_pods.go:89] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.830153   47309 system_pods.go:89] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.830165   47309 system_pods.go:89] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.830178   47309 system_pods.go:89] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.830186   47309 system_pods.go:126] duration metric: took 8.363064ms to wait for k8s-apps to be running ...
	I0626 20:56:46.830198   47309 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:46.830250   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:46.851429   47309 system_svc.go:56] duration metric: took 21.223321ms WaitForService to wait for kubelet.
	I0626 20:56:46.851456   47309 kubeadm.go:581] duration metric: took 4m26.355992846s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:46.851482   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:46.856152   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:46.856177   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:46.856187   47309 node_conditions.go:105] duration metric: took 4.700595ms to run NodePressure ...
	I0626 20:56:46.856197   47309 start.go:228] waiting for startup goroutines ...
	I0626 20:56:46.856203   47309 start.go:233] waiting for cluster config update ...
	I0626 20:56:46.856212   47309 start.go:242] writing updated cluster config ...
	I0626 20:56:46.856472   47309 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:46.911414   47309 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:46.913280   47309 out.go:177] * Done! kubectl is now configured to use "no-preload-934450" cluster and "default" namespace by default
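
The pass that just completed for no-preload-934450 verified, in order: apiserver health, the kube-system pod list, the default service account, running k8s-apps, an active kubelet service, and node capacity (the NodePressure check). Note that it succeeded even though metrics-server-74d5c6b9c-4dkpm was still Pending; only the core components gate readiness here. A rough manual equivalent, assuming kubectl already points at the cluster:

    kubectl get pods -n kube-system                    # pod list / apps running
    kubectl get serviceaccount default -n default      # default service account exists
    minikube ssh -- sudo systemctl is-active --quiet service kubelet   # kubelet service
    kubectl describe nodes                             # cpu and ephemeral-storage capacity
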
	I0626 20:56:45.561459   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:56:45.567555   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0626 20:56:45.568704   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:45.568720   47605 api_server.go:131] duration metric: took 3.984252941s to wait for apiserver health ...
	I0626 20:56:45.568728   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:45.568745   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:45.568789   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:45.608235   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:45.608261   47605 cri.go:89] found id: ""
	I0626 20:56:45.608270   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:45.608335   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.612705   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:45.612774   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:45.649330   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.649353   47605 cri.go:89] found id: ""
	I0626 20:56:45.649362   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:45.649440   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.655104   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:45.655178   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:45.699690   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.699711   47605 cri.go:89] found id: ""
	I0626 20:56:45.699722   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:45.699767   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.704455   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:45.704515   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:45.743181   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:45.743209   47605 cri.go:89] found id: ""
	I0626 20:56:45.743218   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:45.743283   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.748030   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:45.748098   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:45.787325   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:45.787352   47605 cri.go:89] found id: ""
	I0626 20:56:45.787360   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:45.787406   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.792119   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:45.792191   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:45.833192   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:45.833215   47605 cri.go:89] found id: ""
	I0626 20:56:45.833222   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:45.833279   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.838399   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:45.838464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:45.878372   47605 cri.go:89] found id: ""
	I0626 20:56:45.878403   47605 logs.go:284] 0 containers: []
	W0626 20:56:45.878410   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:45.878415   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:45.878464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:45.917051   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:45.917074   47605 cri.go:89] found id: ""
	I0626 20:56:45.917081   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:45.917125   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.921484   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:45.921508   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.962659   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:45.962699   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.993644   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:45.993674   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:46.055087   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:46.055130   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:46.574535   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:46.574581   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:46.617139   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:46.617174   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:46.729727   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:46.729768   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:46.860871   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:46.860908   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:46.922618   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:46.922657   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:46.975973   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:46.976000   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:47.017458   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:47.017488   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:47.058540   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:47.058567   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:49.582112   47605 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:49.582139   47605 system_pods.go:61] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.582145   47605 system_pods.go:61] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.582149   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.582153   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.582157   47605 system_pods.go:61] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.582163   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.582169   47605 system_pods.go:61] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.582175   47605 system_pods.go:61] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.582180   47605 system_pods.go:74] duration metric: took 4.013448182s to wait for pod list to return data ...
	I0626 20:56:49.582187   47605 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:49.588793   47605 default_sa.go:45] found service account: "default"
	I0626 20:56:49.588827   47605 default_sa.go:55] duration metric: took 6.634132ms for default service account to be created ...
	I0626 20:56:49.588836   47605 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:49.596519   47605 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:49.596549   47605 system_pods.go:89] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.596555   47605 system_pods.go:89] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.596562   47605 system_pods.go:89] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.596570   47605 system_pods.go:89] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.596577   47605 system_pods.go:89] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.596585   47605 system_pods.go:89] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.596600   47605 system_pods.go:89] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.596612   47605 system_pods.go:89] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.596622   47605 system_pods.go:126] duration metric: took 7.781697ms to wait for k8s-apps to be running ...
	I0626 20:56:49.596633   47605 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:49.596684   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:49.613188   47605 system_svc.go:56] duration metric: took 16.545322ms WaitForService to wait for kubelet.
	I0626 20:56:49.613212   47605 kubeadm.go:581] duration metric: took 4m17.557252465s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:49.613231   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:49.616820   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:49.616845   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:49.616854   47605 node_conditions.go:105] duration metric: took 3.619443ms to run NodePressure ...
	I0626 20:56:49.616864   47605 start.go:228] waiting for startup goroutines ...
	I0626 20:56:49.616870   47605 start.go:233] waiting for cluster config update ...
	I0626 20:56:49.616878   47605 start.go:242] writing updated cluster config ...
	I0626 20:56:49.617126   47605 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:49.665468   47605 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:49.667447   47605 out.go:177] * Done! kubectl is now configured to use "embed-certs-299839" cluster and "default" namespace by default
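
Everything below this point is the failure-time log capture from the old-k8s-version-490377 node (the "* ==> ... <==" headers are minikube's logs format). The CRI-O entries are debug-level request/response pairs on the CRI RuntimeService gRPC API (ListContainers, ListPodSandbox), which is why each container record is repeated in full for every poll. The same data, tabulated instead of raw, can be pulled on the node (a sketch):

    sudo journalctl -u crio --no-pager -n 400   # raw runtime journal, as dumped below
    sudo crictl ps -a                           # ListContainers, human-readable
    sudo crictl pods                            # ListPodSandbox, human-readable
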
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 20:47:27 UTC, ends at Mon 2023-06-26 21:03:51 UTC. --
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.925121740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0c3eebb2-6cc3-4e13-8957-22504501cc27 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.925277221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f,PodSandboxId:80814ed400554f6c5b7e1841b2cfbc08505c3803222c8567da1d23bbcc6ccb2a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812825409043185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c17bf508-5125-4aa3-b48f-3ec6700ef03b,},Annotations:map[string]string{io.kubernetes.container.hash: a3700da9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796,PodSandboxId:fc0f3f92592360e15eae13cff8501e4d7323272330a2ca28a712e83bfbd90b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1687812824885167913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-k6lww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447152e-e5ad-4a16-a2fa-e1283dd98e1b,},Annotations:map[string]string{io.kubernetes.container.hash: 33d97290,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41,PodSandboxId:f74c1c90d9a549a6594bed35ce6ad1d5d3e7f41488c03a64516c7ce1f2c2f246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1687812824565680340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7hz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fb314-5fe1-4cc2-bc03-79ec432d1a46,},Annotations:map[string]string{io.kubernetes.container.hash: 68d69de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598,PodSandboxId:b87c3356304de48a414b4183b4247071e35a2d0a5737b06f2a7aa7947d7756ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1687812798386109622,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d16f4e4c3d338ac15a9bae60bef2daa,},Annotations:map[string]string{io.kubernetes.container.hash: fd37bd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae,PodSandboxId:92ce758b11fda23f5c677d139551912682ce612b25814b44416b5eef5ea661c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1687812797512587893,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da,PodSandboxId:83db5f78d9adb614bebee733faf06bae6055948fd6d9aaceec688f8186289d6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1687812797077470259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365,PodSandboxId:7c09e40e4201dca1bbeadaf2a2f42991c13b3c837a9b0af472b16f6d5e33ac31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1687812796978499334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e703b2994e5bd1a9d98777f091e32ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e363e056,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0c3eebb2-6cc3-4e13-8957-22504501cc27 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.950655273Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=9d2aac94-23a8-4ff5-97bc-10f35923aaca name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.950863880Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a2d9a694b89586f9ddce85bc77cf5476ac8dd40d52913592cc0854367d16db09,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-bvbnj,Uid:a51799c8-5cb6-42eb-85f0-508d0303445f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812825452626319,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-bvbnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a51799c8-5cb6-42eb-85f0-508d0303445f,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:53:45.115590536Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc0f3f92592360e15eae13cff8501e4d7323272330a2ca28a712e83bfbd90b4c,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-k6lww,Uid:b447152e-e5ad-4a16-a2fa-e1283dd98e1b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812824599257378,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-k6lww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447152e-e5ad-4a16-a2fa-e1283dd98e1b,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:53:44.259697165Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:80814ed400554f6c5b7e1841b2cfbc08505c3803222c8567da1d23bbcc6ccb2a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c17bf508-5125-4aa3-b48f-3ec6700ef03b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812824545658497,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c17bf508-5125-4aa3-b48f-3ec6700ef03b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-06-26T20:53:44.201808679Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f74c1c90d9a549a6594bed35ce6ad1d5d3e7f41488c03a64516c7ce1f2c2f246,Metadata:&PodSandboxMetadata{Name:kube-proxy-m7hz7,Uid:265fb314-5fe1-4cc2-bc03-79ec432d1a46,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812823818497734,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m7hz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fb314-5fe1-4cc2-bc03-79ec432d1a46,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:53:42.575737774Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83db5f78d9adb614bebee733faf06bae6055948fd6d9aaceec688f8186289d6e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-490377,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812796510246684,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-06-26T20:53:15.981529283Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:92ce758b11fda23f5c677d139551912682ce612b25814b44416b5eef5ea661c0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-490377,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812796497307926,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-06-26T20:53:15.981530693Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7c09e40e4201dca1bbeadaf2a2f42991c13b3c837a9b0af472b16f6d5e33ac31,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-490377,Uid:e703b2994e5bd1a9d98777f091e32ff6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812796476316586,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e703b2994e5bd1a9d98777f091e32ff6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e703b2994e5bd1a9d98777f091e32ff6,kubernetes.io/config.seen: 2023-06-26T20:53:15.981527711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b87c3356304de48a414b4183b4247071e35a2d0a5737b06f2a7aa7947d7756ed,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-490377,Uid:2d16f4e4c3d338ac15a9bae60bef2daa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812796450175827,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d16f4e4c3d338ac15a9bae60bef2daa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2d16f4e4c3d338ac15a9bae60bef2daa,kubernetes.io/config.seen: 2023-06-26T20:53:15.981522876Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=9d2aac94-23a8-4ff5-97bc-10f35923aaca name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.951950118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ef3cd85d-080f-481d-a61d-009f051dac22 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.952000484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ef3cd85d-080f-481d-a61d-009f051dac22 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.952149214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f,PodSandboxId:80814ed400554f6c5b7e1841b2cfbc08505c3803222c8567da1d23bbcc6ccb2a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812825409043185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c17bf508-5125-4aa3-b48f-3ec6700ef03b,},Annotations:map[string]string{io.kubernetes.container.hash: a3700da9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796,PodSandboxId:fc0f3f92592360e15eae13cff8501e4d7323272330a2ca28a712e83bfbd90b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1687812824885167913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-k6lww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447152e-e5ad-4a16-a2fa-e1283dd98e1b,},Annotations:map[string]string{io.kubernetes.container.hash: 33d97290,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41,PodSandboxId:f74c1c90d9a549a6594bed35ce6ad1d5d3e7f41488c03a64516c7ce1f2c2f246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1687812824565680340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7hz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fb314-5fe1-4cc2-bc03-79ec432d1a46,},Annotations:map[string]string{io.kubernetes.container.hash: 68d69de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598,PodSandboxId:b87c3356304de48a414b4183b4247071e35a2d0a5737b06f2a7aa7947d7756ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1687812798386109622,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d16f4e4c3d338ac15a9bae60bef2daa,},Annotations:map[string]string{io.kubernetes.container.hash: fd37bd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae,PodSandboxId:92ce758b11fda23f5c677d139551912682ce612b25814b44416b5eef5ea661c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1687812797512587893,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da,PodSandboxId:83db5f78d9adb614bebee733faf06bae6055948fd6d9aaceec688f8186289d6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1687812797077470259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365,PodSandboxId:7c09e40e4201dca1bbeadaf2a2f42991c13b3c837a9b0af472b16f6d5e33ac31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1687812796978499334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e703b2994e5bd1a9d98777f091e32ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e363e056,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ef3cd85d-080f-481d-a61d-009f051dac22 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.964718630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=feefff00-77c7-49f0-9587-697d6eccd997 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.964768734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=feefff00-77c7-49f0-9587-697d6eccd997 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:03:50 old-k8s-version-490377 crio[718]: time="2023-06-26 21:03:50.964990249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f,PodSandboxId:80814ed400554f6c5b7e1841b2cfbc08505c3803222c8567da1d23bbcc6ccb2a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812825409043185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c17bf508-5125-4aa3-b48f-3ec6700ef03b,},Annotations:map[string]string{io.kubernetes.container.hash: a3700da9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796,PodSandboxId:fc0f3f92592360e15eae13cff8501e4d7323272330a2ca28a712e83bfbd90b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1687812824885167913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-k6lww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447152e-e5ad-4a16-a2fa-e1283dd98e1b,},Annotations:map[string]string{io.kubernetes.container.hash: 33d97290,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41,PodSandboxId:f74c1c90d9a549a6594bed35ce6ad1d5d3e7f41488c03a64516c7ce1f2c2f246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1687812824565680340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7hz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fb314-5fe1-4cc2-bc03-79ec432d1a46,},Annotations:map[string]string{io.kubernetes.container.hash: 68d69de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598,PodSandboxId:b87c3356304de48a414b4183b4247071e35a2d0a5737b06f2a7aa7947d7756ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1687812798386109622,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d16f4e4c3d338ac15a9bae60bef2daa,},Annotations:map[string]string{io.kubernetes.container.hash: fd37bd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae,PodSandboxId:92ce758b11fda23f5c677d139551912682ce612b25814b44416b5eef5ea661c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1687812797512587893,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da,PodSandboxId:83db5f78d9adb614bebee733faf06bae6055948fd6d9aaceec688f8186289d6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1687812797077470259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365,PodSandboxId:7c09e40e4201dca1bbeadaf2a2f42991c13b3c837a9b0af472b16f6d5e33ac31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1687812796978499334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e703b2994e5bd1a9d98777f091e32ff6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e363e056,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=feefff00-77c7-49f0-9587-697d6eccd997 name=/runtime.v1alpha2.RuntimeService/ListContainers
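
	The exchange above is CRI-O serving the gRPC method /runtime.v1alpha2.RuntimeService/ListContainers: an empty ContainerFilter is what produces the "No filters were applied" debug line, so each poll returns the full list of the same eight running containers. As a rough illustration only (not the exact caller that produced these entries), a minimal Go client against the CRI socket recorded in the node annotations below (/var/run/crio/crio.sock) might look like this:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	v1alpha2 "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// Dial CRI-O's unix socket; the "unix://" target scheme is handled by grpc-go.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := v1alpha2.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// An empty filter matches the logged request, so the full container list comes back.
	resp, err := client.ListContainers(ctx, &v1alpha2.ListContainersRequest{
		Filter: &v1alpha2.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncated IDs match the "container status" table below.
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}
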
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	e4c63b2286876       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   80814ed400554
	9211a896843b4       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   fc0f3f9259236
	974041d011ecf       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   f74c1c90d9a54
	909b122decd75       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   b87c3356304de
	eee0db517063a       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   92ce758b11fda
	d5bf95816703a       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   83db5f78d9adb
	59fe9451027f9       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   7c09e40e4201d
	
	* 
	* ==> coredns [9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796] <==
	* .:53
	2023-06-26T20:53:45.144Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-06-26T20:53:45.144Z [INFO] CoreDNS-1.6.2
	2023-06-26T20:53:45.144Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-06-26T20:53:45.159Z [INFO] 127.0.0.1:40572 - 13622 "HINFO IN 2354216843956826877.8527488041721077620. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014357461s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-490377
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-490377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=old-k8s-version-490377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_53_27_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:53:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 21:03:22 +0000   Mon, 26 Jun 2023 20:53:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 21:03:22 +0000   Mon, 26 Jun 2023 20:53:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 21:03:22 +0000   Mon, 26 Jun 2023 20:53:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 21:03:22 +0000   Mon, 26 Jun 2023 20:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.111
	  Hostname:    old-k8s-version-490377
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 ea232fb4ab5748478f4675b503f2e984
	 System UUID:                ea232fb4-ab57-4847-8f46-75b503f2e984
	 Boot ID:                    03b59918-dcfa-4a1b-ad64-21a28bdb7886
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-k6lww                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-490377                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                kube-apiserver-old-k8s-version-490377             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                kube-controller-manager-old-k8s-version-490377    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m31s
	  kube-system                kube-proxy-m7hz7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-490377             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                metrics-server-74d5856cc6-bvbnj                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  Starting                 10m                kubelet, old-k8s-version-490377     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x9 over 10m)  kubelet, old-k8s-version-490377     Node old-k8s-version-490377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-490377     Node old-k8s-version-490377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-490377     Node old-k8s-version-490377 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet, old-k8s-version-490377     Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-490377  Starting kube-proxy.
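
	The percentages in the resource tables above follow from the node capacity listed earlier (cpu: 2, memory: 2165900Ki); kubectl describe appears to truncate toward zero, which is why 250m of 2000m shows as 12% rather than 13%. A small sketch of that arithmetic, with the truncation behavior as an assumption:

package main

import "fmt"

func main() {
	// Node capacity from the "describe nodes" output above.
	const cpuMilli int64 = 2 * 1000 // cpu: 2 -> 2000m
	const memKi int64 = 2165900     // memory: 2165900Ki

	// Integer division truncates, matching the table values.
	pct := func(request, capacity int64) int64 { return request * 100 / capacity }

	fmt.Printf("coredns CPU request:    100m  -> %d%%\n", pct(100, cpuMilli))    // 5%
	fmt.Printf("apiserver CPU request:  250m  -> %d%%\n", pct(250, cpuMilli))    // 12%
	fmt.Printf("total CPU requests:     750m  -> %d%%\n", pct(750, cpuMilli))    // 37%
	fmt.Printf("coredns memory request: 70Mi  -> %d%%\n", pct(70*1024, memKi))   // 3%
	fmt.Printf("total memory requests:  270Mi -> %d%%\n", pct(270*1024, memKi))  // 12%
	fmt.Printf("total memory limits:    170Mi -> %d%%\n", pct(170*1024, memKi))  // 8%
}
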
	
	* 
	* ==> dmesg <==
	* [Jun26 20:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.081220] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.643149] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.437214] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140249] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.487915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.152043] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.116286] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.151754] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.112634] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.236084] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[ +19.298071] systemd-fstab-generator[1040]: Ignoring "noauto" for root device
	[  +0.419702] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jun26 20:48] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.781800] kauditd_printk_skb: 2 callbacks suppressed
	[Jun26 20:53] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.663835] systemd-fstab-generator[3217]: Ignoring "noauto" for root device
	[ +40.461505] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598] <==
	* 2023-06-26 20:53:18.529408 I | raft: d9925a5c077e2b1a became follower at term 0
	2023-06-26 20:53:18.529432 I | raft: newRaft d9925a5c077e2b1a [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-06-26 20:53:18.529447 I | raft: d9925a5c077e2b1a became follower at term 1
	2023-06-26 20:53:18.541213 W | auth: simple token is not cryptographically signed
	2023-06-26 20:53:18.545061 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-06-26 20:53:18.546217 I | etcdserver: d9925a5c077e2b1a as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-06-26 20:53:18.546534 I | etcdserver/membership: added member d9925a5c077e2b1a [https://192.168.72.111:2380] to cluster 5b15f244ed8f8770
	2023-06-26 20:53:18.548437 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-06-26 20:53:18.548840 I | embed: listening for metrics on http://192.168.72.111:2381
	2023-06-26 20:53:18.549174 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-06-26 20:53:19.029810 I | raft: d9925a5c077e2b1a is starting a new election at term 1
	2023-06-26 20:53:19.030004 I | raft: d9925a5c077e2b1a became candidate at term 2
	2023-06-26 20:53:19.030037 I | raft: d9925a5c077e2b1a received MsgVoteResp from d9925a5c077e2b1a at term 2
	2023-06-26 20:53:19.030072 I | raft: d9925a5c077e2b1a became leader at term 2
	2023-06-26 20:53:19.030089 I | raft: raft.node: d9925a5c077e2b1a elected leader d9925a5c077e2b1a at term 2
	2023-06-26 20:53:19.030324 I | etcdserver: setting up the initial cluster version to 3.3
	2023-06-26 20:53:19.031792 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-06-26 20:53:19.031828 I | etcdserver/api: enabled capabilities for version 3.3
	2023-06-26 20:53:19.031853 I | etcdserver: published {Name:old-k8s-version-490377 ClientURLs:[https://192.168.72.111:2379]} to cluster 5b15f244ed8f8770
	2023-06-26 20:53:19.031859 I | embed: ready to serve client requests
	2023-06-26 20:53:19.032945 I | embed: ready to serve client requests
	2023-06-26 20:53:19.033201 I | embed: serving client requests on 127.0.0.1:2379
	2023-06-26 20:53:19.034184 I | embed: serving client requests on 192.168.72.111:2379
	2023-06-26 21:03:19.072777 I | mvcc: store.index: compact 680
	2023-06-26 21:03:19.075082 I | mvcc: finished scheduled compaction at 680 (took 1.47525ms)
	
	* 
	* ==> kernel <==
	*  21:03:51 up 16 min,  0 users,  load average: 0.15, 0.18, 0.20
	Linux old-k8s-version-490377 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365] <==
	* I0626 20:56:45.712798       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 20:56:45.713299       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 20:56:45.713534       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 20:56:45.713577       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 20:58:23.204756       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 20:58:23.205148       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 20:58:23.205261       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 20:58:23.205322       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 20:59:23.205643       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 20:59:23.205799       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 20:59:23.205837       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 20:59:23.205848       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:01:23.206477       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 21:01:23.206594       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 21:01:23.206652       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:01:23.206660       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:03:23.208071       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 21:03:23.208238       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 21:03:23.208307       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:03:23.208315       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da] <==
	* W0626 20:57:27.781529       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 20:57:44.796361       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 20:57:59.783975       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 20:58:15.048843       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 20:58:31.786362       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 20:58:45.300741       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 20:59:03.788266       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 20:59:15.552830       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 20:59:35.790113       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 20:59:45.804986       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:00:07.792452       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:00:16.056614       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:00:39.794580       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:00:46.309259       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:01:11.796340       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:01:16.562748       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:01:43.799079       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:01:46.815243       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:02:15.801644       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:02:17.067359       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0626 21:02:47.319986       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:02:47.804768       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:03:17.572220       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:03:19.806764       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:03:47.825369       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41] <==
	* W0626 20:53:45.241750       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0626 20:53:45.262126       1 node.go:135] Successfully retrieved node IP: 192.168.72.111
	I0626 20:53:45.262183       1 server_others.go:149] Using iptables Proxier.
	I0626 20:53:45.263235       1 server.go:529] Version: v1.16.0
	I0626 20:53:45.264926       1 config.go:131] Starting endpoints config controller
	I0626 20:53:45.264975       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0626 20:53:45.265290       1 config.go:313] Starting service config controller
	I0626 20:53:45.265333       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0626 20:53:45.365289       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0626 20:53:45.365801       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae] <==
	* I0626 20:53:22.240630       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0626 20:53:22.290505       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 20:53:22.290617       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:53:22.290660       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 20:53:22.290696       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 20:53:22.290720       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:53:22.290747       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 20:53:22.291484       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 20:53:22.291517       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:53:22.291543       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:53:22.292345       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:53:22.295187       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 20:53:23.292183       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 20:53:23.293339       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:53:23.295005       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 20:53:23.301821       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:53:23.302046       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 20:53:23.303014       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 20:53:23.303162       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:53:23.304587       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:53:23.304658       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 20:53:23.305283       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:53:23.306174       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 20:53:42.346521       1 factory.go:585] pod is already present in the activeQ
	E0626 20:53:42.392230       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 20:47:27 UTC, ends at Mon 2023-06-26 21:03:51 UTC. --
	Jun 26 20:59:22 old-k8s-version-490377 kubelet[3235]: E0626 20:59:22.994841    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 20:59:36 old-k8s-version-490377 kubelet[3235]: E0626 20:59:36.056589    3235 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 20:59:36 old-k8s-version-490377 kubelet[3235]: E0626 20:59:36.056702    3235 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 20:59:36 old-k8s-version-490377 kubelet[3235]: E0626 20:59:36.056764    3235 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 20:59:36 old-k8s-version-490377 kubelet[3235]: E0626 20:59:36.056798    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jun 26 20:59:49 old-k8s-version-490377 kubelet[3235]: E0626 20:59:49.995871    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:00:02 old-k8s-version-490377 kubelet[3235]: E0626 21:00:02.995109    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:00:15 old-k8s-version-490377 kubelet[3235]: E0626 21:00:15.995330    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:00:28 old-k8s-version-490377 kubelet[3235]: E0626 21:00:28.994947    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:00:39 old-k8s-version-490377 kubelet[3235]: E0626 21:00:39.995522    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:00:51 old-k8s-version-490377 kubelet[3235]: E0626 21:00:51.994660    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:01:06 old-k8s-version-490377 kubelet[3235]: E0626 21:01:06.994936    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:01:21 old-k8s-version-490377 kubelet[3235]: E0626 21:01:21.994821    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:01:35 old-k8s-version-490377 kubelet[3235]: E0626 21:01:35.995011    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:01:48 old-k8s-version-490377 kubelet[3235]: E0626 21:01:48.994696    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:02:01 old-k8s-version-490377 kubelet[3235]: E0626 21:02:01.995625    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:02:16 old-k8s-version-490377 kubelet[3235]: E0626 21:02:16.994654    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:02:28 old-k8s-version-490377 kubelet[3235]: E0626 21:02:28.995010    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:02:43 old-k8s-version-490377 kubelet[3235]: E0626 21:02:43.994984    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:02:54 old-k8s-version-490377 kubelet[3235]: E0626 21:02:54.994717    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:03:08 old-k8s-version-490377 kubelet[3235]: E0626 21:03:08.994789    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:03:16 old-k8s-version-490377 kubelet[3235]: E0626 21:03:16.073689    3235 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jun 26 21:03:19 old-k8s-version-490377 kubelet[3235]: E0626 21:03:19.995285    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:03:32 old-k8s-version-490377 kubelet[3235]: E0626 21:03:32.994604    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:03:44 old-k8s-version-490377 kubelet[3235]: E0626 21:03:44.994867    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f] <==
	* I0626 20:53:45.674862       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 20:53:45.698296       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 20:53:45.698366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 20:53:45.720362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 20:53:45.724520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-490377_56252732-3b71-44ba-b8a6-626850ffffd7!
	I0626 20:53:45.725580       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1489b44b-117b-4ea6-bf06-8c5fb249f56c", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-490377_56252732-3b71-44ba-b8a6-626850ffffd7 became leader
	I0626 20:53:45.826824       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-490377_56252732-3b71-44ba-b8a6-626850ffffd7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490377 -n old-k8s-version-490377
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-490377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-bvbnj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-490377 describe pod metrics-server-74d5856cc6-bvbnj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-490377 describe pod metrics-server-74d5856cc6-bvbnj: exit status 1 (68.015022ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-bvbnj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-490377 describe pod metrics-server-74d5856cc6-bvbnj: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.32s)
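For readers replaying this post-mortem by hand, the harness's non-running-pod check above reduces to a single kubectl field-selector query. The following is a minimal Go sketch of that query, assuming kubectl is on PATH; the nonRunningPods helper and the hard-coded profile name are illustrative, not the harness's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// nonRunningPods mirrors the post-mortem query above: list every pod,
	// in all namespaces, whose phase is not Running.
	func nonRunningPods(kubeContext string) ([]string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).Output()
		if err != nil {
			return nil, fmt.Errorf("kubectl get po: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		pods, err := nonRunningPods("old-k8s-version-490377")
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		// A pod listed here can be deleted before it is described, which is
		// why the describe step above returned NotFound for the
		// metrics-server pod; that is a race, not a failure of the query.
		for _, p := range pods {
			fmt.Println("non-running pod:", p)
		}
	}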

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0626 20:56:48.326908   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-934450 -n no-preload-934450
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-06-26 21:05:47.479571197 +0000 UTC m=+5411.979599033
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
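The "context deadline exceeded" above is what this harness reports when its poll loop exhausts the 9m0s budget before any pod labeled k8s-app=kubernetes-dashboard reaches Running. A rough Go sketch of that wait pattern follows, assuming kubectl on PATH; waitForLabeledPod and the 10-second poll interval are illustrative choices, not the framework's implementation:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForLabeledPod polls until a pod matching the selector reports
	// phase Running, or the context's deadline expires.
	func waitForLabeledPod(ctx context.Context, kubeContext, ns, selector string) error {
		for {
			out, _ := exec.CommandContext(ctx, "kubectl",
				"--context", kubeContext, "-n", ns,
				"get", "po", "-l", selector,
				"-o=jsonpath={.items[*].status.phase}",
			).Output()
			if strings.Contains(string(out), "Running") {
				return nil
			}
			select {
			case <-ctx.Done():
				// Surfaces as "context deadline exceeded", as in the log above.
				return fmt.Errorf("pod %q never became Running: %w", selector, ctx.Err())
			case <-time.After(10 * time.Second):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		err := waitForLabeledPod(ctx, "no-preload-934450",
			"kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
		fmt.Println(err)
	}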
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934450 -n no-preload-934450
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-934450 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-934450 logs -n 25: (1.688981002s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-490377        | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-123924                              | stopped-upgrade-123924       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603225 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | disable-driver-mounts-603225                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934450             | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC | 26 Jun 23 20:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490377             | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-299839            | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-473235  | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC | 26 Jun 23 20:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC |                     |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934450                  | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-299839                 | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-473235       | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:52 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 20:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 20:44:35.222921   47779 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:44:35.223059   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223070   47779 out.go:309] Setting ErrFile to fd 2...
	I0626 20:44:35.223074   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223199   47779 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:44:35.223797   47779 out.go:303] Setting JSON to false
	I0626 20:44:35.224674   47779 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5222,"bootTime":1687807053,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 20:44:35.224734   47779 start.go:137] virtualization: kvm guest
	I0626 20:44:35.226901   47779 out.go:177] * [default-k8s-diff-port-473235] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 20:44:35.228842   47779 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 20:44:35.228804   47779 notify.go:220] Checking for updates...
	I0626 20:44:35.230224   47779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 20:44:35.231788   47779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:44:35.233239   47779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:44:35.234554   47779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 20:44:35.236823   47779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 20:44:35.238432   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:44:35.238825   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.238878   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.253669   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0626 20:44:35.254014   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.254589   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.254610   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.254907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.255090   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.255322   47779 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 20:44:35.255597   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.255627   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.269620   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39451
	I0626 20:44:35.270027   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.270571   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.270599   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.270857   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.271037   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.302607   47779 out.go:177] * Using the kvm2 driver based on existing profile
	I0626 20:44:35.303877   47779 start.go:297] selected driver: kvm2
	I0626 20:44:35.303889   47779 start.go:954] validating driver "kvm2" against &{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.303997   47779 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 20:44:35.304600   47779 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.304681   47779 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 20:44:35.319036   47779 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 20:44:35.319459   47779 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 20:44:35.319499   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:44:35.319516   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:44:35.319532   47779 start_flags.go:319] config:
	{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.319725   47779 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.321690   47779 out.go:177] * Starting control plane node default-k8s-diff-port-473235 in cluster default-k8s-diff-port-473235
	I0626 20:44:33.713644   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:35.323076   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:44:35.323119   47779 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 20:44:35.323145   47779 cache.go:57] Caching tarball of preloaded images
	I0626 20:44:35.323245   47779 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 20:44:35.323260   47779 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 20:44:35.323385   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:44:35.323607   47779 start.go:365] acquiring machines lock for default-k8s-diff-port-473235: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:44:39.793629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:42.865602   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:48.945651   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:52.017646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:58.097650   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:01.169629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:07.249647   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:10.321634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:16.401660   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:19.473641   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:25.553634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:28.625721   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:34.705617   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:37.777753   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:43.857659   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:46.929661   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:53.009637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:56.081646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:02.161637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:05.233633   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:11.313640   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:14.317303   47309 start.go:369] acquired machines lock for "no-preload-934450" in 2m47.59820508s
	I0626 20:46:14.317355   47309 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:14.317388   47309 fix.go:54] fixHost starting: 
	I0626 20:46:14.317703   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:14.317733   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:14.331991   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0626 20:46:14.332358   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:14.332862   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:46:14.332888   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:14.333180   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:14.333368   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:14.333556   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:46:14.334930   47309 fix.go:102] recreateIfNeeded on no-preload-934450: state=Stopped err=<nil>
	I0626 20:46:14.334954   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	W0626 20:46:14.335122   47309 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:14.336692   47309 out.go:177] * Restarting existing kvm2 VM for "no-preload-934450" ...
	I0626 20:46:14.338056   47309 main.go:141] libmachine: (no-preload-934450) Calling .Start
	I0626 20:46:14.338201   47309 main.go:141] libmachine: (no-preload-934450) Ensuring networks are active...
	I0626 20:46:14.339255   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network default is active
	I0626 20:46:14.339575   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network mk-no-preload-934450 is active
	I0626 20:46:14.339980   47309 main.go:141] libmachine: (no-preload-934450) Getting domain xml...
	I0626 20:46:14.340638   47309 main.go:141] libmachine: (no-preload-934450) Creating domain...
	I0626 20:46:15.550725   47309 main.go:141] libmachine: (no-preload-934450) Waiting to get IP...
	I0626 20:46:15.551641   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.552053   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.552125   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.552057   48070 retry.go:31] will retry after 285.629833ms: waiting for machine to come up
	I0626 20:46:15.839584   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.839950   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.839976   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.839920   48070 retry.go:31] will retry after 318.234269ms: waiting for machine to come up
	I0626 20:46:16.159361   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.159793   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.159823   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.159752   48070 retry.go:31] will retry after 486.280811ms: waiting for machine to come up
	I0626 20:46:14.315357   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:14.315401   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:46:14.317194   46683 machine.go:91] provisioned docker machine in 4m37.381545898s
	I0626 20:46:14.317230   46683 fix.go:56] fixHost completed within 4m37.403983922s
	I0626 20:46:14.317236   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 4m37.404002624s
	W0626 20:46:14.317252   46683 start.go:672] error starting host: provision: host is not running
	W0626 20:46:14.317326   46683 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0626 20:46:14.317333   46683 start.go:687] Will try again in 5 seconds ...
	I0626 20:46:16.647364   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.647777   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.647803   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.647721   48070 retry.go:31] will retry after 396.658606ms: waiting for machine to come up
	I0626 20:46:17.046604   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.047131   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.047156   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.047033   48070 retry.go:31] will retry after 741.382401ms: waiting for machine to come up
	I0626 20:46:17.789616   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.790035   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.790068   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.790014   48070 retry.go:31] will retry after 636.769895ms: waiting for machine to come up
	I0626 20:46:18.427899   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:18.428300   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:18.428326   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:18.428272   48070 retry.go:31] will retry after 869.736092ms: waiting for machine to come up
	I0626 20:46:19.299429   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:19.299742   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:19.299765   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:19.299717   48070 retry.go:31] will retry after 1.261709663s: waiting for machine to come up
	I0626 20:46:20.563421   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:20.563778   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:20.563807   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:20.563751   48070 retry.go:31] will retry after 1.280588584s: waiting for machine to come up
	I0626 20:46:19.318965   46683 start.go:365] acquiring machines lock for old-k8s-version-490377: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:46:21.846094   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:21.846530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:21.846557   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:21.846475   48070 retry.go:31] will retry after 1.542478163s: waiting for machine to come up
	I0626 20:46:23.391088   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:23.391530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:23.391559   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:23.391474   48070 retry.go:31] will retry after 2.115450652s: waiting for machine to come up
	I0626 20:46:25.508447   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:25.508882   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:25.508915   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:25.508826   48070 retry.go:31] will retry after 3.403199971s: waiting for machine to come up
	I0626 20:46:28.916347   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:28.916756   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:28.916782   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:28.916706   48070 retry.go:31] will retry after 3.011345508s: waiting for machine to come up
	I0626 20:46:33.094365   47605 start.go:369] acquired machines lock for "embed-certs-299839" in 2m23.878841424s
	I0626 20:46:33.094419   47605 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:33.094440   47605 fix.go:54] fixHost starting: 
	I0626 20:46:33.094827   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:33.094856   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:33.114045   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0626 20:46:33.114400   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:33.114927   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:46:33.114949   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:33.115244   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:33.115434   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:33.115573   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:46:33.116751   47605 fix.go:102] recreateIfNeeded on embed-certs-299839: state=Stopped err=<nil>
	I0626 20:46:33.116783   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	W0626 20:46:33.116944   47605 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:33.119904   47605 out.go:177] * Restarting existing kvm2 VM for "embed-certs-299839" ...
	I0626 20:46:33.121277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Start
	I0626 20:46:33.121442   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring networks are active...
	I0626 20:46:33.122062   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network default is active
	I0626 20:46:33.122397   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network mk-embed-certs-299839 is active
	I0626 20:46:33.122783   47605 main.go:141] libmachine: (embed-certs-299839) Getting domain xml...
	I0626 20:46:33.123400   47605 main.go:141] libmachine: (embed-certs-299839) Creating domain...
	I0626 20:46:31.930997   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931492   47309 main.go:141] libmachine: (no-preload-934450) Found IP for machine: 192.168.50.38
	I0626 20:46:31.931507   47309 main.go:141] libmachine: (no-preload-934450) Reserving static IP address...
	I0626 20:46:31.931524   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has current primary IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931877   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.931901   47309 main.go:141] libmachine: (no-preload-934450) DBG | skip adding static IP to network mk-no-preload-934450 - found existing host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"}
	I0626 20:46:31.931916   47309 main.go:141] libmachine: (no-preload-934450) Reserved static IP address: 192.168.50.38
	I0626 20:46:31.931928   47309 main.go:141] libmachine: (no-preload-934450) DBG | Getting to WaitForSSH function...
	I0626 20:46:31.931939   47309 main.go:141] libmachine: (no-preload-934450) Waiting for SSH to be available...
	I0626 20:46:31.934393   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934786   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.934814   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934954   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH client type: external
	I0626 20:46:31.934971   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa (-rw-------)
	I0626 20:46:31.935060   47309 main.go:141] libmachine: (no-preload-934450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:31.935091   47309 main.go:141] libmachine: (no-preload-934450) DBG | About to run SSH command:
	I0626 20:46:31.935112   47309 main.go:141] libmachine: (no-preload-934450) DBG | exit 0
	I0626 20:46:32.021036   47309 main.go:141] libmachine: (no-preload-934450) DBG | SSH cmd err, output: <nil>: 
	I0626 20:46:32.021357   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetConfigRaw
	I0626 20:46:32.022056   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.024943   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025390   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.025426   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025663   47309 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/config.json ...
	I0626 20:46:32.025851   47309 machine.go:88] provisioning docker machine ...
	I0626 20:46:32.025868   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.026092   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026257   47309 buildroot.go:166] provisioning hostname "no-preload-934450"
	I0626 20:46:32.026280   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026450   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.028213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028583   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.028618   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028699   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.028869   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029019   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029154   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.029415   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.029867   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.029887   47309 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-934450 && echo "no-preload-934450" | sudo tee /etc/hostname
	I0626 20:46:32.150597   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-934450
	
	I0626 20:46:32.150629   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.153096   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153441   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.153486   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153576   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.153773   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.153984   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.154125   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.154288   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.154697   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.154723   47309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-934450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-934450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-934450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:32.270792   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:32.270827   47309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:32.270890   47309 buildroot.go:174] setting up certificates
	I0626 20:46:32.270902   47309 provision.go:83] configureAuth start
	I0626 20:46:32.270922   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.271206   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.273824   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274189   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.274213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274310   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.276495   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.276896   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.276927   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.277062   47309 provision.go:138] copyHostCerts
	I0626 20:46:32.277118   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:32.277126   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:32.277188   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:32.277271   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:32.277278   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:32.277300   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:32.277351   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:32.277357   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:32.277393   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:32.277450   47309 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.no-preload-934450 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube no-preload-934450]
	I0626 20:46:32.417361   47309 provision.go:172] copyRemoteCerts
	I0626 20:46:32.417430   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:32.417452   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.419946   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420300   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.420331   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420501   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.420703   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.420864   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.421017   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.501807   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 20:46:32.524284   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:32.546766   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0626 20:46:32.569677   47309 provision.go:86] duration metric: configureAuth took 298.742863ms
	I0626 20:46:32.569711   47309 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:32.569925   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:32.570026   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.572516   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.572864   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.572901   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.573011   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.573178   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573350   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573492   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.573646   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.574084   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.574102   47309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:32.859482   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:32.859509   47309 machine.go:91] provisioned docker machine in 833.647496ms
	I0626 20:46:32.859519   47309 start.go:300] post-start starting for "no-preload-934450" (driver="kvm2")
	I0626 20:46:32.859527   47309 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:32.859543   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.859892   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:32.859942   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.862731   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863099   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.863131   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863250   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.863434   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.863570   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.863698   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.946748   47309 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:32.951257   47309 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:32.951278   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:32.951351   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:32.951436   47309 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:32.951516   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:32.959676   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:32.982687   47309 start.go:303] post-start completed in 123.154915ms
	I0626 20:46:32.982714   47309 fix.go:56] fixHost completed within 18.665325334s
	I0626 20:46:32.982763   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.985318   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985693   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.985725   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985868   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.986072   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986226   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986388   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.986547   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.986951   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.986968   47309 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 20:46:33.094211   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812393.043726278
	
	I0626 20:46:33.094239   47309 fix.go:206] guest clock: 1687812393.043726278
	I0626 20:46:33.094248   47309 fix.go:219] Guest: 2023-06-26 20:46:33.043726278 +0000 UTC Remote: 2023-06-26 20:46:32.98271893 +0000 UTC m=+186.399054274 (delta=61.007348ms)
	I0626 20:46:33.094272   47309 fix.go:190] guest clock delta is within tolerance: 61.007348ms
	I0626 20:46:33.094277   47309 start.go:83] releasing machines lock for "no-preload-934450", held for 18.776943332s
	I0626 20:46:33.094309   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.094577   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:33.097365   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097744   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.097775   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097979   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098382   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098586   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098661   47309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:33.098712   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.098797   47309 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:33.098816   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.101252   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101554   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101580   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101599   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101719   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.101873   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.101951   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101981   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.102007   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102160   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.102182   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.102316   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.102443   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102551   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.210044   47309 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:33.215912   47309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:33.359955   47309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:33.366146   47309 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:33.366217   47309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:33.380504   47309 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:33.380526   47309 start.go:466] detecting cgroup driver to use...
	I0626 20:46:33.380579   47309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:33.393306   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:33.404983   47309 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:33.405038   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:33.418216   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:33.432337   47309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:33.531250   47309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:33.645556   47309 docker.go:212] disabling docker service ...
	I0626 20:46:33.645633   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:33.659515   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:33.671856   47309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:33.774921   47309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:33.883215   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:46:33.898847   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:33.917506   47309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:33.917580   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.928683   47309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:33.928743   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.939242   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.949833   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.960544   47309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:33.970988   47309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:33.979977   47309 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:33.980018   47309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:33.992692   47309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:46:34.001898   47309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:34.099514   47309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:46:34.265988   47309 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:34.266060   47309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:34.273678   47309 start.go:534] Will wait 60s for crictl version
	I0626 20:46:34.273739   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.277401   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:34.312548   47309 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:34.312630   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.360715   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.413882   47309 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:34.415181   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:34.417841   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418166   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:34.418189   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418410   47309 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:34.422651   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:34.434668   47309 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:34.434717   47309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:34.465589   47309 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:34.465614   47309 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:46:34.465690   47309 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.465708   47309 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.465738   47309 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.465754   47309 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.465788   47309 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.465828   47309 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.465693   47309 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.465936   47309 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.467120   47309 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.467219   47309 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.467247   47309 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.467295   47309 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.467306   47309 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.467250   47309 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.636874   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.655059   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.683826   47309 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0626 20:46:34.683861   47309 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.683928   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.702952   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.703028   47309 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0626 20:46:34.703071   47309 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.703103   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.741790   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.741897   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0626 20:46:34.742006   47309 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.746779   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.749151   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0626 20:46:34.759216   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.760925   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.763727   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.802768   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0626 20:46:34.802855   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0626 20:46:34.802879   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802936   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802879   47309 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:34.875629   47309 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0626 20:46:34.875683   47309 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.875741   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976009   47309 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0626 20:46:34.976048   47309 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.976082   47309 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0626 20:46:34.976100   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976116   47309 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.976117   47309 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0626 20:46:34.976143   47309 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.976156   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976179   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:35.433285   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.379704   47605 main.go:141] libmachine: (embed-certs-299839) Waiting to get IP...
	I0626 20:46:34.380770   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.381274   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.381362   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.381264   48187 retry.go:31] will retry after 291.849421ms: waiting for machine to come up
	I0626 20:46:34.674760   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.675247   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.675276   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.675192   48187 retry.go:31] will retry after 276.057593ms: waiting for machine to come up
	I0626 20:46:34.952573   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.953045   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.953077   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.953003   48187 retry.go:31] will retry after 360.478931ms: waiting for machine to come up
	I0626 20:46:35.315537   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.316036   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.316057   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.315988   48187 retry.go:31] will retry after 582.62072ms: waiting for machine to come up
	I0626 20:46:35.899816   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.900171   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.900232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.900154   48187 retry.go:31] will retry after 502.843212ms: waiting for machine to come up
	I0626 20:46:36.404792   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:36.405188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:36.405222   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:36.405134   48187 retry.go:31] will retry after 594.811848ms: waiting for machine to come up
	I0626 20:46:37.001827   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:37.002238   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:37.002264   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:37.002182   48187 retry.go:31] will retry after 1.067889284s: waiting for machine to come up
	I0626 20:46:38.071685   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:38.072135   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:38.072158   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:38.072094   48187 retry.go:31] will retry after 1.189834776s: waiting for machine to come up
	I0626 20:46:36.844137   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (2.041169028s)
	I0626 20:46:36.844171   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0626 20:46:36.844205   47309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.27.3: (2.041210189s)
	I0626 20:46:36.844232   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0626 20:46:36.844245   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844257   47309 ssh_runner.go:235] Completed: which crictl: (1.868146562s)
	I0626 20:46:36.844293   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844300   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:36.844234   47309 ssh_runner.go:235] Completed: which crictl: (1.968483663s)
	I0626 20:46:36.844349   47309 ssh_runner.go:235] Completed: which crictl: (1.868154335s)
	I0626 20:46:36.844364   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:36.844380   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:36.844405   47309 ssh_runner.go:235] Completed: which crictl: (1.868235538s)
	I0626 20:46:36.844428   47309 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.411115015s)
	I0626 20:46:36.844448   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:36.844455   47309 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0626 20:46:36.844488   47309 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:36.844513   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:39.895683   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (3.051359255s)
	I0626 20:46:39.895720   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0626 20:46:39.895808   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0: (3.051484848s)
	I0626 20:46:39.895824   47309 ssh_runner.go:235] Completed: which crictl: (3.051289954s)
	I0626 20:46:39.895855   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0626 20:46:39.895873   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1: (3.051494383s)
	I0626 20:46:39.895888   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:39.895908   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0626 20:46:39.895950   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:39.895909   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3: (3.051516174s)
	I0626 20:46:39.895990   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:39.896000   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3: (3.051535924s)
	I0626 20:46:39.896033   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0626 20:46:39.896034   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0626 20:46:39.896089   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.896102   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901778   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0626 20:46:39.901797   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901830   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.911439   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0626 20:46:39.911477   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0626 20:46:39.911517   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0626 20:46:39.943818   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0626 20:46:39.943947   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:41.278134   47309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.334156546s)
	I0626 20:46:41.278173   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0626 20:46:41.278135   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (1.376281957s)
	I0626 20:46:41.278187   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0626 20:46:41.278207   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:41.278256   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.263991   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:39.264402   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:39.264433   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:39.264371   48187 retry.go:31] will retry after 1.805262511s: waiting for machine to come up
	I0626 20:46:41.071232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:41.071707   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:41.071731   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:41.071662   48187 retry.go:31] will retry after 1.945519102s: waiting for machine to come up
	I0626 20:46:43.018581   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:43.019039   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:43.019075   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:43.018983   48187 retry.go:31] will retry after 2.83662877s: waiting for machine to come up
	I0626 20:46:43.745408   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.467115523s)
	I0626 20:46:43.745443   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0626 20:46:43.745479   47309 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:43.745551   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:45.011214   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.26563338s)
	I0626 20:46:45.011266   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0626 20:46:45.011296   47309 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:45.011349   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:45.858520   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:45.858992   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:45.859026   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:45.858941   48187 retry.go:31] will retry after 2.332305212s: waiting for machine to come up
	I0626 20:46:48.193085   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:48.193594   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:48.193625   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:48.193543   48187 retry.go:31] will retry after 2.846333425s: waiting for machine to come up
	I0626 20:46:52.634333   47779 start.go:369] acquired machines lock for "default-k8s-diff-port-473235" in 2m17.310683576s
	I0626 20:46:52.634385   47779 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:52.634413   47779 fix.go:54] fixHost starting: 
	I0626 20:46:52.634850   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:52.634890   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:52.654153   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0626 20:46:52.654638   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:52.655306   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:46:52.655337   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:52.655747   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:52.655952   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:46:52.656158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:46:52.657823   47779 fix.go:102] recreateIfNeeded on default-k8s-diff-port-473235: state=Stopped err=<nil>
	I0626 20:46:52.657850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	W0626 20:46:52.657997   47779 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:52.659722   47779 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-473235" ...
	I0626 20:46:51.043526   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044005   47605 main.go:141] libmachine: (embed-certs-299839) Found IP for machine: 192.168.39.51
	I0626 20:46:51.044034   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has current primary IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044045   47605 main.go:141] libmachine: (embed-certs-299839) Reserving static IP address...
	I0626 20:46:51.044351   47605 main.go:141] libmachine: (embed-certs-299839) Reserved static IP address: 192.168.39.51
	I0626 20:46:51.044368   47605 main.go:141] libmachine: (embed-certs-299839) Waiting for SSH to be available...
	I0626 20:46:51.044405   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.044439   47605 main.go:141] libmachine: (embed-certs-299839) DBG | skip adding static IP to network mk-embed-certs-299839 - found existing host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"}
	I0626 20:46:51.044456   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Getting to WaitForSSH function...
	I0626 20:46:51.046694   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047088   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.047121   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047312   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH client type: external
	I0626 20:46:51.047348   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa (-rw-------)
	I0626 20:46:51.047392   47605 main.go:141] libmachine: (embed-certs-299839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:51.047414   47605 main.go:141] libmachine: (embed-certs-299839) DBG | About to run SSH command:
	I0626 20:46:51.047432   47605 main.go:141] libmachine: (embed-certs-299839) DBG | exit 0
	I0626 20:46:51.137058   47605 main.go:141] libmachine: (embed-certs-299839) DBG | SSH cmd err, output: <nil>: 
	I0626 20:46:51.137408   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetConfigRaw
	I0626 20:46:51.197444   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.199920   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200306   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.200339   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200574   47605 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/config.json ...
	I0626 20:46:51.267260   47605 machine.go:88] provisioning docker machine ...
	I0626 20:46:51.267304   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:51.267709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.267921   47605 buildroot.go:166] provisioning hostname "embed-certs-299839"
	I0626 20:46:51.267943   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.268086   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.270429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270762   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.270790   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270886   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.271060   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271200   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271308   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.271475   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.271933   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.271950   47605 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-299839 && echo "embed-certs-299839" | sudo tee /etc/hostname
	I0626 20:46:51.403584   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-299839
	
	I0626 20:46:51.403622   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.406552   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.406876   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.406904   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.407053   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.407335   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407530   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407716   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.407883   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.408280   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.408300   47605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-299839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-299839/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-299839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:51.534666   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:51.534702   47605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:51.534745   47605 buildroot.go:174] setting up certificates
	I0626 20:46:51.534753   47605 provision.go:83] configureAuth start
	I0626 20:46:51.534766   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.535047   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.537753   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538113   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.538141   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.540471   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.540890   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.540922   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.541015   47605 provision.go:138] copyHostCerts
	I0626 20:46:51.541089   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:51.541099   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:51.541155   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:51.541237   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:51.541246   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:51.541277   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:51.541333   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:51.541339   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:51.541357   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:51.541434   47605 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.embed-certs-299839 san=[192.168.39.51 192.168.39.51 localhost 127.0.0.1 minikube embed-certs-299839]
	I0626 20:46:51.873317   47605 provision.go:172] copyRemoteCerts
	I0626 20:46:51.873396   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:51.873427   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.876293   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876659   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.876696   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876889   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.877100   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.877262   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.877430   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:51.970189   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:46:51.993067   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:52.015607   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0626 20:46:52.037359   47605 provision.go:86] duration metric: configureAuth took 502.581033ms
	I0626 20:46:52.037401   47605 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:52.037623   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:52.037714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.040949   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.041486   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041642   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.041859   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042061   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042235   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.042398   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.042916   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.042936   47605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:52.366045   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:52.366072   47605 machine.go:91] provisioned docker machine in 1.098783864s
	I0626 20:46:52.366083   47605 start.go:300] post-start starting for "embed-certs-299839" (driver="kvm2")
	I0626 20:46:52.366112   47605 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:52.366134   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.366443   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:52.366472   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.369138   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369570   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.369630   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369781   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.369957   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.370131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.370278   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.467055   47605 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:52.471203   47605 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:52.471222   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:52.471288   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:52.471394   47605 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:52.471510   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:52.484668   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:52.510268   47605 start.go:303] post-start completed in 144.162745ms
	I0626 20:46:52.510292   47605 fix.go:56] fixHost completed within 19.415851972s
	I0626 20:46:52.510315   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.513188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513629   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.513662   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513848   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.514062   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514228   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514415   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.514569   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.514968   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.514983   47605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:46:52.634177   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812412.582368193
	
	I0626 20:46:52.634199   47605 fix.go:206] guest clock: 1687812412.582368193
	I0626 20:46:52.634209   47605 fix.go:219] Guest: 2023-06-26 20:46:52.582368193 +0000 UTC Remote: 2023-06-26 20:46:52.510296584 +0000 UTC m=+163.430129249 (delta=72.071609ms)
	I0626 20:46:52.634237   47605 fix.go:190] guest clock delta is within tolerance: 72.071609ms
	I0626 20:46:52.634242   47605 start.go:83] releasing machines lock for "embed-certs-299839", held for 19.539848437s
	I0626 20:46:52.634277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.634623   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:52.637732   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638182   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.638220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638476   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639040   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639223   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639307   47605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:52.639346   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.639490   47605 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:52.639517   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.642288   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642923   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642968   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643016   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643351   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643492   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643528   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643564   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643763   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.643778   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643973   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643991   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.644109   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.644240   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.761230   47605 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:52.766865   47605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:52.919883   47605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:52.927218   47605 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:52.927290   47605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:52.948916   47605 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:52.948983   47605 start.go:466] detecting cgroup driver to use...
	I0626 20:46:52.949043   47605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:52.968673   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:52.982360   47605 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:52.982416   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:52.996984   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:53.015021   47605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:53.116692   47605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:53.251017   47605 docker.go:212] disabling docker service ...
	I0626 20:46:53.251096   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:53.268097   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:53.282223   47605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:53.412477   47605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:53.528110   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:46:53.541392   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:53.558736   47605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:53.558809   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.568482   47605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:53.568553   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.578178   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.587728   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.597231   47605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:53.606954   47605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:53.615250   47605 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:53.615308   47605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:53.628161   47605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:46:53.636477   47605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:53.755919   47605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:46:53.928744   47605 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:53.928823   47605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:53.934088   47605 start.go:534] Will wait 60s for crictl version
	I0626 20:46:53.934152   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:46:53.939345   47605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:53.971679   47605 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:53.971781   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.013494   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.062724   47605 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:54.064536   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:54.067854   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:54.068254   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068535   47605 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:54.072971   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:54.085981   47605 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:54.086048   47605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:52.661170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Start
	I0626 20:46:52.661331   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring networks are active...
	I0626 20:46:52.662042   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network default is active
	I0626 20:46:52.662444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network mk-default-k8s-diff-port-473235 is active
	I0626 20:46:52.663218   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Getting domain xml...
	I0626 20:46:52.663876   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Creating domain...
	I0626 20:46:53.987148   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting to get IP...
	I0626 20:46:53.988282   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988739   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988832   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:53.988735   48355 retry.go:31] will retry after 271.192351ms: waiting for machine to come up
	I0626 20:46:54.261343   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261825   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261857   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.261773   48355 retry.go:31] will retry after 362.262293ms: waiting for machine to come up
	I0626 20:46:54.625453   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625951   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625978   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.625859   48355 retry.go:31] will retry after 311.337455ms: waiting for machine to come up
	I0626 20:46:54.938519   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939023   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939053   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.938972   48355 retry.go:31] will retry after 446.154442ms: waiting for machine to come up
	I0626 20:46:52.039929   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.0285527s)
	I0626 20:46:52.039951   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0626 20:46:52.039974   47309 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.040015   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.786422   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0626 20:46:52.786468   47309 cache_images.go:123] Successfully loaded all cached images
	I0626 20:46:52.786474   47309 cache_images.go:92] LoadImages completed in 18.320847233s
	I0626 20:46:52.786562   47309 ssh_runner.go:195] Run: crio config
	I0626 20:46:52.857805   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:46:52.857833   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:52.857849   47309 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:52.857871   47309 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.38 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-934450 NodeName:no-preload-934450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:52.858035   47309 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-934450"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:46:52.858115   47309 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-934450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
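The kubeadm config printed above is rendered from the kubeadm options struct a few lines earlier: minikube substitutes the node's IP, name, and Kubernetes version into a YAML template. A rough Go sketch of that rendering step, with invented field names (minikube's real template is more complete than this excerpt):

package main

import (
	"os"
	"text/template"
)

// Illustrative only: the template and struct fields below are assumptions
// for the sketch, not minikube's own identifiers.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log above for the no-preload profile.
	t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}{"192.168.50.38", 8443, "no-preload-934450"})
}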
	I0626 20:46:52.858172   47309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:52.867179   47309 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:52.867253   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:52.875412   47309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0626 20:46:52.891376   47309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:52.906859   47309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0626 20:46:52.924927   47309 ssh_runner.go:195] Run: grep 192.168.50.38	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:52.929059   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
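The one-liner above makes the control-plane.minikube.internal mapping idempotent: it filters any stale entry out of /etc/hosts, appends the current IP, and installs the result through a temp file so the hosts file is never left half-written. The same filter-then-append logic as a small Go sketch (the demo path and simplified error handling are assumptions of the sketch):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry keeps exactly one line mapping name in a hosts file:
// drop any stale entry, then append the current one. The real command in
// the log additionally stages the result in /tmp and installs it with
// sudo cp so /etc/hosts is replaced in one step.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // skip blanks and the old mapping, if any
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Demo against a scratch file; the log operates on /etc/hosts.
	path := "/tmp/hosts-demo"
	os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
	fmt.Println(ensureHostsEntry(path, "192.168.50.38", "control-plane.minikube.internal"))
}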
	I0626 20:46:52.942789   47309 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450 for IP: 192.168.50.38
	I0626 20:46:52.942825   47309 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:52.943011   47309 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:52.943059   47309 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:52.943138   47309 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.key
	I0626 20:46:52.943195   47309 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key.01da567d
	I0626 20:46:52.943236   47309 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key
	I0626 20:46:52.943341   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:52.943376   47309 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:52.943396   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:52.943435   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:52.943472   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:52.943509   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:52.943551   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:52.944147   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:52.971630   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:52.997892   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:53.024951   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 20:46:53.048462   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:53.075077   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:53.100318   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:53.129545   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:53.162187   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:53.191304   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:53.216166   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:53.240182   47309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:53.256447   47309 ssh_runner.go:195] Run: openssl version
	I0626 20:46:53.262053   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:53.272163   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277028   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277084   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.282611   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:53.296039   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:53.306923   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312778   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312825   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.320244   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:53.334066   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:53.347662   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353665   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353725   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.361150   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
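The openssl x509 -hash calls followed by ln -fs above install each CA under /etc/ssl/certs/<subject-hash>.0, the lookup scheme OpenSSL-based clients use to find trust anchors. A short Go sketch of the same step, shelling out to openssl for the hash rather than reimplementing it:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash creates the <subject-hash>.0 symlink that OpenSSL-based
// tools use to look up a trusted CA, mirroring the log's
// "openssl x509 -hash -noout" followed by "ln -fs".
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}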
	I0626 20:46:53.374846   47309 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:53.380462   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:53.387949   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:53.393690   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:53.399208   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:53.405073   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:53.411265   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
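Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 24 hours; a non-zero exit would force regeneration. The equivalent check in Go, using the standard library instead of shelling out (a sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within d,
// the same test as "openssl x509 -noout -checkend 86400" in the log.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}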
	I0626 20:46:53.417798   47309 kubeadm.go:404] StartCluster: {Name:no-preload-934450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:53.417916   47309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:53.417950   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:53.451231   47309 cri.go:89] found id: ""
	I0626 20:46:53.451307   47309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:53.460716   47309 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:53.460737   47309 kubeadm.go:636] restartCluster start
	I0626 20:46:53.460790   47309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:53.470518   47309 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.471961   47309 kubeconfig.go:92] found "no-preload-934450" server: "https://192.168.50.38:8443"
	I0626 20:46:53.475433   47309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:53.484054   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.484108   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:53.497348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.998070   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.998129   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.010119   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.498134   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.498223   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.512223   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.997432   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.997520   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.015317   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.497435   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.497516   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.512591   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.998180   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.998251   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.013135   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:56.497483   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.497573   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.512714   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
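The repeated "Checking apiserver status" entries are a fixed-interval poll: roughly every 500ms minikube shells out to pgrep and treats exit status 1 as "apiserver not up yet". A condensed Go sketch of that loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process
// exists or the deadline passes, the loop behind the repeated
// "Checking apiserver status ..." entries above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		// pgrep exits 1 when nothing matches; retry shortly.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(10 * time.Second))
}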
	I0626 20:46:54.116295   47605 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:54.116360   47605 ssh_runner.go:195] Run: which lz4
	I0626 20:46:54.120344   47605 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:46:54.124462   47605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:46:54.124490   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:46:55.959041   47605 crio.go:444] Took 1.838722 seconds to copy over tarball
	I0626 20:46:55.959115   47605 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:46:59.019532   47605 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060382374s)
	I0626 20:46:59.019555   47605 crio.go:451] Took 3.060486 seconds to extract the tarball
	I0626 20:46:59.019562   47605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:46:59.058687   47605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:59.102812   47605 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:46:59.102833   47605 cache_images.go:84] Images are preloaded, skipping loading
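The preload path above follows a check-copy-extract-clean pattern: stat the tarball on the node, scp it over only when missing, unpack it with lz4-aware tar, then delete it. A simplified Go sketch of the extract-and-clean tail of that sequence (paths match the log; the scp step is omitted):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the log's preload handling: require the tarball
// to be present, unpack it with lz4-aware tar, then remove it.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball must be copied over first: %w", err)
	}
	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	return os.Remove(tarball) // the log's "rm: /preloaded.tar.lz4" step
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
}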
	I0626 20:46:59.102896   47605 ssh_runner.go:195] Run: crio config
	I0626 20:46:55.386479   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.386986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.387014   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:55.386901   48355 retry.go:31] will retry after 710.798834ms: waiting for machine to come up
	I0626 20:46:56.099580   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100079   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100112   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:56.100023   48355 retry.go:31] will retry after 921.187154ms: waiting for machine to come up
	I0626 20:46:57.022481   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022914   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.022859   48355 retry.go:31] will retry after 914.232442ms: waiting for machine to come up
	I0626 20:46:57.938375   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938823   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938845   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.938807   48355 retry.go:31] will retry after 1.411011331s: waiting for machine to come up
	I0626 20:46:59.351697   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352133   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352169   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:59.352076   48355 retry.go:31] will retry after 1.830031795s: waiting for machine to come up
	I0626 20:46:56.997450   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.997518   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.009310   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.497847   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.497929   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.513061   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.997474   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.997553   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.012610   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.498200   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.498274   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.513410   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.997938   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.998022   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.013357   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.497503   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.497581   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.514354   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.997445   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.997531   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.008894   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.497471   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.497555   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.508635   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.998326   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.998429   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.009836   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.498479   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.498593   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.510348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.159206   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:46:59.159236   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:59.159252   47605 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:59.159286   47605 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-299839 NodeName:embed-certs-299839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:59.159423   47605 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-299839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:46:59.159484   47605 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-299839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 20:46:59.159540   47605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:59.168802   47605 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:59.168882   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:59.177994   47605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0626 20:46:59.196041   47605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:59.214092   47605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0626 20:46:59.235187   47605 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:59.239440   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:59.251723   47605 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839 for IP: 192.168.39.51
	I0626 20:46:59.251772   47605 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:59.251943   47605 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:59.252017   47605 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:59.252134   47605 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/client.key
	I0626 20:46:59.252381   47605 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key.be9c3c95
	I0626 20:46:59.252482   47605 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key
	I0626 20:46:59.252626   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:59.252667   47605 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:59.252682   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:59.252718   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:59.252748   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:59.252805   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:59.252868   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:59.253555   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:59.280222   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:59.306244   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:59.331876   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:46:59.358710   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:59.385239   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:59.408963   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:59.433684   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:59.457235   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:59.480565   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:59.507918   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:59.532762   47605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:59.551283   47605 ssh_runner.go:195] Run: openssl version
	I0626 20:46:59.557079   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:59.568335   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573129   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573187   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.579116   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:46:59.589952   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:59.600935   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605668   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605735   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.611234   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:59.622615   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:59.633737   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638884   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638962   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.644559   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:59.655653   47605 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:59.660632   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:59.666672   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:59.672628   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:59.679194   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:59.685197   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:59.691190   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0626 20:46:59.697063   47605 kubeadm.go:404] StartCluster: {Name:embed-certs-299839 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:59.697146   47605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:59.697191   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:59.731197   47605 cri.go:89] found id: ""
	I0626 20:46:59.731256   47605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:59.741949   47605 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:59.741968   47605 kubeadm.go:636] restartCluster start
	I0626 20:46:59.742023   47605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:59.751837   47605 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.753347   47605 kubeconfig.go:92] found "embed-certs-299839" server: "https://192.168.39.51:8443"
	I0626 20:46:59.756955   47605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:59.766951   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.767023   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.779343   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.280064   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.280149   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.293730   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.780264   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.780347   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.793352   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.279827   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.279911   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.292843   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.779409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.779513   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.793293   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.279814   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.279902   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.296345   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.779892   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.779980   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.796346   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.280342   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.280417   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.292883   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.780156   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.780232   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.792667   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.184295   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184668   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184694   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:01.184605   48355 retry.go:31] will retry after 2.248796967s: waiting for machine to come up
	I0626 20:47:03.435559   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436054   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436086   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:03.435982   48355 retry.go:31] will retry after 2.012102985s: waiting for machine to come up
	I0626 20:47:01.998275   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.998353   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.014217   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.497731   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.497824   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.509505   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.998119   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.998202   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.009348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.485111   47309 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:03.485154   47309 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:03.485167   47309 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:03.485216   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:03.516791   47309 cri.go:89] found id: ""
	I0626 20:47:03.516868   47309 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:03.531523   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:03.540694   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:03.540761   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549498   47309 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549525   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:03.687202   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.779117   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.091878038s)
	I0626 20:47:04.779156   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.983470   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:05.059963   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
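The restart path re-runs kubeadm phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd), each invocation prefixing PATH with the versioned binaries directory so the matching v1.27.3 tooling is picked up. A compact Go sketch of that sequence (sudo is dropped and error handling simplified for the sketch):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runPhase invokes one "kubeadm init phase ..." step with PATH prefixed
// by the versioned binaries directory, as the log's env PATH=... lines do.
func runPhase(phase ...string) error {
	args := append([]string{"init", "phase"}, phase...)
	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command("kubeadm", args...)
	cmd.Env = append(os.Environ(),
		"PATH=/var/lib/minikube/binaries/v1.27.3:"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// The restart sequence from the log, phase by phase.
	for _, p := range [][]string{
		{"certs", "all"}, {"kubeconfig", "all"},
		{"kubelet-start"}, {"control-plane", "all"}, {"etcd", "local"},
	} {
		if err := runPhase(p...); err != nil {
			fmt.Println(p, err)
			return
		}
	}
}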
	I0626 20:47:05.136199   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:05.136282   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:05.663265   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:06.163057   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:04.280330   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.280447   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.292565   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:04.780127   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.780225   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.797554   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.279900   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.279986   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.297853   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.779501   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.779594   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.794314   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.279916   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.280001   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.296829   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.779473   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.779566   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.793302   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.279802   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.279888   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.292407   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.779813   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.779914   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.793591   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.279846   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.279935   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.292196   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.779753   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.779859   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.792362   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.450681   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451186   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451216   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:05.451117   48355 retry.go:31] will retry after 3.442192384s: waiting for machine to come up
	I0626 20:47:08.895024   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:08.895520   48355 retry.go:31] will retry after 4.272351839s: waiting for machine to come up
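Interleaved with that wait, a second process (pid 47779) is blocked in libmachine waiting for the default-k8s-diff-port-473235 VM to obtain a DHCP lease; retry.go schedules each new attempt after a varying delay. A sketch of that wait loop, where the growing base wait and jitter are assumptions for illustration and lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for reading the domain's
    // DHCP lease out of libvirt.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    func main() {
        wait := 250 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("machine IP:", ip)
                return
            }
            // Jitter the delay, matching the irregular intervals in the log.
            d := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
            time.Sleep(d)
            wait *= 2 // grow the base wait between attempts
        }
    }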
	I0626 20:47:06.662926   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.163275   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.662871   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.689321   47309 api_server.go:72] duration metric: took 2.55312002s to wait for apiserver process to appear ...
	I0626 20:47:07.689348   47309 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:07.689366   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:10.879412   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:10.879439   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:11.379823   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.386705   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.386736   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:11.880574   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.892733   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.892768   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:12.380392   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:12.389894   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:47:12.400274   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:12.400307   47309 api_server.go:131] duration metric: took 4.710951407s to wait for apiserver health ...
	I0626 20:47:12.400320   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:47:12.400332   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:12.402355   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
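The healthz sequence above is typical of a restarting apiserver: the anonymous probe is refused with 403 until RBAC bootstrapping lets system:anonymous read /healthz, then 500 while the rbac/bootstrap-roles and scheduling poststarthooks finish, and finally 200 "ok". A minimal Go probe in the same spirit (assuming the VM's self-signed serving certificate, hence the skipped TLS verification; not minikube's api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver serves a cert minikube generated, so an
                // anonymous probe skips verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.50.38:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz ok: %s\n", body)
                    return
                }
                // 403 and 500 both mean "not ready yet": keep polling.
                fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }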
	I0626 20:47:09.280409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:09.280512   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:09.293009   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:09.767593   47605 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:09.767636   47605 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:09.767648   47605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:09.767705   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:09.800380   47605 cri.go:89] found id: ""
	I0626 20:47:09.800465   47605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:09.819239   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:09.830482   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:09.830547   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840424   47605 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840451   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:09.979898   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.746785   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.960847   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:11.041569   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
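Because all four kubeconfigs are missing, minikube skips the stale-config cleanup and reconfigures instead, re-running individual kubeadm init phases against the generated /var/tmp/minikube/kubeadm.yaml rather than doing a full init. A condensed sketch of that sequence (the log additionally prefixes each call with an env PATH pointing at /var/lib/minikube/binaries/v1.27.3, omitted here):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // The phases in the order they appear in the log.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                log.Fatalf("%v failed: %v\n%s", args, err, out)
            }
        }
    }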
	I0626 20:47:11.122238   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:11.122322   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:11.640034   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.140386   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.640370   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.139901   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.639546   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.663848   47605 api_server.go:72] duration metric: took 2.54160148s to wait for apiserver process to appear ...
	I0626 20:47:13.663874   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:13.663905   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:14.587552   46683 start.go:369] acquired machines lock for "old-k8s-version-490377" in 55.268521785s
	I0626 20:47:14.587610   46683 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:47:14.587622   46683 fix.go:54] fixHost starting: 
	I0626 20:47:14.588035   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:47:14.588074   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:47:14.607186   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0626 20:47:14.607765   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:47:14.608361   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:47:14.608384   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:47:14.608697   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:47:14.608908   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:14.609056   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:47:14.610765   46683 fix.go:102] recreateIfNeeded on old-k8s-version-490377: state=Stopped err=<nil>
	I0626 20:47:14.610791   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	W0626 20:47:14.611905   46683 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:47:14.613885   46683 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-490377" ...
	I0626 20:47:13.169996   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.170568   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Found IP for machine: 192.168.61.238
	I0626 20:47:13.170601   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserving static IP address...
	I0626 20:47:13.170622   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has current primary IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.171048   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.171080   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserved static IP address: 192.168.61.238
	I0626 20:47:13.171107   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | skip adding static IP to network mk-default-k8s-diff-port-473235 - found existing host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"}
	I0626 20:47:13.171128   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Getting to WaitForSSH function...
	I0626 20:47:13.171141   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for SSH to be available...
	I0626 20:47:13.173755   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174235   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.174265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174442   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH client type: external
	I0626 20:47:13.174485   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa (-rw-------)
	I0626 20:47:13.174518   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:13.174538   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | About to run SSH command:
	I0626 20:47:13.174553   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | exit 0
	I0626 20:47:13.265799   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | SSH cmd err, output: <nil>: 
	I0626 20:47:13.266189   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetConfigRaw
	I0626 20:47:13.266850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.269749   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270212   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.270253   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270498   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:47:13.270732   47779 machine.go:88] provisioning docker machine ...
	I0626 20:47:13.270758   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:13.270959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271112   47779 buildroot.go:166] provisioning hostname "default-k8s-diff-port-473235"
	I0626 20:47:13.271134   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.273679   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274087   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.274135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274273   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.274446   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274618   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274747   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.274940   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.275353   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.275369   47779 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-473235 && echo "default-k8s-diff-port-473235" | sudo tee /etc/hostname
	I0626 20:47:13.416565   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-473235
	
	I0626 20:47:13.416595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.420132   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420596   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.420670   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.421172   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421392   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.421821   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.422425   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.422457   47779 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-473235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-473235/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-473235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:13.566095   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:47:13.566131   47779 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:13.566175   47779 buildroot.go:174] setting up certificates
	I0626 20:47:13.566192   47779 provision.go:83] configureAuth start
	I0626 20:47:13.566206   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.566509   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.569795   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570251   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.570283   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570476   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.573020   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573439   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.573475   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573704   47779 provision.go:138] copyHostCerts
	I0626 20:47:13.573782   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:13.573795   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:13.573859   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:13.573976   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:13.573987   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:13.574016   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:13.574094   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:13.574108   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:13.574134   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:13.574199   47779 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-473235 san=[192.168.61.238 192.168.61.238 localhost 127.0.0.1 minikube default-k8s-diff-port-473235]
	I0626 20:47:13.795155   47779 provision.go:172] copyRemoteCerts
	I0626 20:47:13.795207   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:13.795230   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.798039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798457   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.798512   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798706   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.798918   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.799130   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.799274   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:13.892185   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:13.921840   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0626 20:47:13.951311   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:13.980185   47779 provision.go:86] duration metric: configureAuth took 413.976937ms
	I0626 20:47:13.980216   47779 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:13.980460   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:47:13.980551   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.983814   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984217   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.984265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984604   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.984826   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985010   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985144   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.985344   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.985947   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.985979   47779 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:14.317679   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:14.317702   47779 machine.go:91] provisioned docker machine in 1.046953094s
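The "%!s(MISSING)" inside the logged command above (and in "date +%!s(MISSING).%!N(MISSING)" further down) is not what ran on the VM: it is Go's fmt marker for a format verb with no matching operand, produced when a command template containing a literal %s was passed through a Printf-style logger. The shell text actually executed is "printf %s" and "date +%s.%N". A two-line reproduction:

    package main

    import "fmt"

    func main() {
        // One verb, zero operands: fmt injects the MISSING markers,
        // printing: date +%!s(MISSING).%!N(MISSING)
        fmt.Printf("date +%s.%N\n")
    }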
	I0626 20:47:14.317713   47779 start.go:300] post-start starting for "default-k8s-diff-port-473235" (driver="kvm2")
	I0626 20:47:14.317723   47779 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:14.317744   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.318064   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:14.318101   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.321001   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321358   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.321408   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321598   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.321806   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.321986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.322139   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.414722   47779 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:14.419797   47779 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:14.419822   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:14.419895   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:14.419990   47779 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:14.420118   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:14.430766   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:14.458086   47779 start.go:303] post-start completed in 140.355388ms
	I0626 20:47:14.458107   47779 fix.go:56] fixHost completed within 21.823695632s
	I0626 20:47:14.458125   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.460953   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461277   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.461308   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461472   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.461651   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.461841   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.462025   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.462175   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:14.462805   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:14.462823   47779 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 20:47:14.587374   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812434.534091475
	
	I0626 20:47:14.587395   47779 fix.go:206] guest clock: 1687812434.534091475
	I0626 20:47:14.587403   47779 fix.go:219] Guest: 2023-06-26 20:47:14.534091475 +0000 UTC Remote: 2023-06-26 20:47:14.458110543 +0000 UTC m=+159.266861615 (delta=75.980932ms)
	I0626 20:47:14.587446   47779 fix.go:190] guest clock delta is within tolerance: 75.980932ms
	I0626 20:47:14.587456   47779 start.go:83] releasing machines lock for "default-k8s-diff-port-473235", held for 21.953095935s
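The clock check just above runs date +%s.%N on the guest over SSH and compares the result with the host's clock, accepting the machine when the skew is small. A sketch of that comparison, using the exact values from this log (the tolerance constant is an assumption for illustration; minikube's actual threshold lives in fix.go):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1687812434.534091475" // guest `date +%s.%N`
        parts := strings.SplitN(guestOut, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec).UTC()

        // Host-side timestamp taken at the same moment, from the log.
        host := time.Date(2023, 6, 26, 20, 47, 14, 458110543, time.UTC)
        delta := guest.Sub(host) // 75.980932ms, as logged

        const tolerance = time.Second // assumed threshold for illustration
        if delta < 0 {
            delta = -delta
        }
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        }
    }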
	I0626 20:47:14.587492   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.587776   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:14.590654   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591111   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.591143   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591332   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.591869   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592074   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592151   47779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:14.592205   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.592451   47779 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:14.592489   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.595039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595271   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595585   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595615   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595659   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595698   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595901   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596076   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596118   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596311   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596344   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596466   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.596622   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.683637   47779 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:14.713738   47779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:14.869873   47779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:14.877719   47779 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:14.877815   47779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:14.893656   47779 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:47:14.893682   47779 start.go:466] detecting cgroup driver to use...
	I0626 20:47:14.893738   47779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:14.908419   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:14.921730   47779 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:14.921812   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:14.940659   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:14.955010   47779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:15.062849   47779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:15.193682   47779 docker.go:212] disabling docker service ...
	I0626 20:47:15.193810   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:15.210855   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:15.223362   47779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:15.348648   47779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:15.471398   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:47:15.496137   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:15.523967   47779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:47:15.524041   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.537188   47779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:15.537258   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.550404   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.563577   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.574958   47779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:15.588685   47779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:15.600611   47779 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:15.600680   47779 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:15.615658   47779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
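The status-255 sysctl above is the expected probe result when br_netfilter is not yet loaded: /proc/sys/net/bridge/ only exists once the module is in, so minikube falls back to modprobe and then enables IPv4 forwarding. The same fallback in sketch form:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %w (%s)", name, args, err, out)
        }
        return nil
    }

    func main() {
        // sysctl exits non-zero while the module is unloaded, mirroring
        // the "cannot stat /proc/sys/net/bridge/..." error in the log.
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            _ = run("sudo", "modprobe", "br_netfilter") // best-effort, as above
        }
        _ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }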
	I0626 20:47:15.628004   47779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:15.763410   47779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:15.982719   47779 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:15.982799   47779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:47:15.990799   47779 start.go:534] Will wait 60s for crictl version
	I0626 20:47:15.990864   47779 ssh_runner.go:195] Run: which crictl
	I0626 20:47:15.997709   47779 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:16.041802   47779 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:16.041893   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.094989   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.151324   47779 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:47:12.403841   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:12.420028   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:12.459593   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:12.486209   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:12.486256   47309 system_pods.go:61] "coredns-5d78c9869d-dwkng" [8919aa0b-b8b6-4672-aa75-ea5ea1d27ef6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:12.486270   47309 system_pods.go:61] "etcd-no-preload-934450" [67a1367b-dc99-4613-8a75-796a64f13f0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:12.486281   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [7452cf79-3e8f-4dce-922a-a52115c7059f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:12.486291   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [a3393645-4d3d-4fab-a32f-c15ff3bfcdca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:12.486300   47309 system_pods.go:61] "kube-proxy-phrv2" [d08fdd52-cc2a-43cb-84c4-170ad241527e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:12.486310   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [cc1c89f8-925a-4847-b693-08fbc4905119] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:12.486319   47309 system_pods.go:61] "metrics-server-74d5c6b9c-7szm5" [d94c68f7-4521-4366-b5db-38f420a78dd2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:12.486331   47309 system_pods.go:61] "storage-provisioner" [7aa74f96-c306-4d70-a211-715b4877b15b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:12.486341   47309 system_pods.go:74] duration metric: took 26.722879ms to wait for pod list to return data ...
	I0626 20:47:12.486359   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:12.490745   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:12.490784   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:12.490809   47309 node_conditions.go:105] duration metric: took 4.437855ms to run NodePressure ...
	I0626 20:47:12.490830   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:12.794912   47309 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800827   47309 kubeadm.go:787] kubelet initialised
	I0626 20:47:12.800855   47309 kubeadm.go:788] duration metric: took 5.915334ms waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800865   47309 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:12.807162   47309 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:14.822450   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
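After kubeadm init phase addon all, the run enters the pod_ready wait: up to 4m0s per system-critical pod, polling each pod's Ready condition (still "False" for coredns at this point). A standalone sketch of the same wait using kubectl's JSONPath output (pod name taken from the log; this is an illustration, not minikube's pod_ready.go, which talks to the API directly):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget in the log
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--namespace", "kube-system",
                "get", "pod", "coredns-5d78c9869d-dwkng",
                "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }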
	I0626 20:47:14.614985   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Start
	I0626 20:47:14.615159   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring networks are active...
	I0626 20:47:14.615866   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network default is active
	I0626 20:47:14.616331   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network mk-old-k8s-version-490377 is active
	I0626 20:47:14.616785   46683 main.go:141] libmachine: (old-k8s-version-490377) Getting domain xml...
	I0626 20:47:14.617507   46683 main.go:141] libmachine: (old-k8s-version-490377) Creating domain...
	I0626 20:47:16.055502   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting to get IP...
	I0626 20:47:16.056448   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.056913   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.057009   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.056935   48478 retry.go:31] will retry after 281.770624ms: waiting for machine to come up
	I0626 20:47:16.340685   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.341472   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.341496   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.341268   48478 retry.go:31] will retry after 249.185886ms: waiting for machine to come up
	I0626 20:47:16.591867   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.592547   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.592718   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.592671   48478 retry.go:31] will retry after 327.814159ms: waiting for machine to come up
	I0626 20:47:17.910025   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:17.910061   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:18.411167   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.425310   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.425345   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:18.910567   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.920897   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.920933   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:19.410736   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:19.418228   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0626 20:47:19.428516   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:19.428551   47605 api_server.go:131] duration metric: took 5.764669652s to wait for apiserver health ...
	I0626 20:47:19.428561   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:47:19.428573   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:19.430711   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
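
The healthz sequence above is the normal restart progression: first a 403, because the unauthenticated probe reaches the apiserver before the RBAC bootstrap roles exist; then a 500 while post-start hooks such as rbac/bootstrap-roles are still failing; finally a plain 200 "ok". A minimal sketch of such a poll, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and no client credentials; waitHealthz is an illustrative name:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok".
// Early 403s (anonymous user, RBAC roles not bootstrapped yet) and 500s
// (post-start hooks still failing) are treated as "not ready yet", not fatal.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// the apiserver cert is signed by minikube's own CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.39.51:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
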
	I0626 20:47:16.152563   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:16.156250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156617   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:16.156644   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156894   47779 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:16.162480   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:16.180283   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:47:16.180336   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:16.227399   47779 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:47:16.227474   47779 ssh_runner.go:195] Run: which lz4
	I0626 20:47:16.233720   47779 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:16.240423   47779 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:16.240463   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:47:18.263416   47779 crio.go:444] Took 2.029753 seconds to copy over tarball
	I0626 20:47:18.263515   47779 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
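
Here the preload path kicks in: crictl reports no images for v1.27.3, so the prebuilt image tarball (~437 MB) is copied into the guest and unpacked straight into /var with tar's lz4 filter, which is far faster than pulling each image individually. A local sketch of the extraction step, assuming sudo, tar and lz4 are available on the target; extractPreload is an illustrative name:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a cri-o image preload into /var, the same command
// the log runs over SSH. "-I lz4" tells tar to filter the archive through
// the lz4 binary for decompression.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
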
	I0626 20:47:16.837607   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:19.361799   47309 pod_ready.go:92] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.361869   47309 pod_ready.go:81] duration metric: took 6.554677083s waiting for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.361886   47309 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370122   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.370145   47309 pod_ready.go:81] duration metric: took 8.249243ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370157   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391052   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:21.391082   47309 pod_ready.go:81] duration metric: took 2.020917194s waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391096   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:16.922381   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.922923   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.922952   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.922873   48478 retry.go:31] will retry after 486.21568ms: waiting for machine to come up
	I0626 20:47:17.410676   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:17.411282   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:17.411305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:17.411227   48478 retry.go:31] will retry after 606.277374ms: waiting for machine to come up
	I0626 20:47:18.020296   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.021367   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.021400   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.021287   48478 retry.go:31] will retry after 576.843487ms: waiting for machine to come up
	I0626 20:47:18.599674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.600326   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.600352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.600221   48478 retry.go:31] will retry after 857.329718ms: waiting for machine to come up
	I0626 20:47:19.459545   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:19.460101   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:19.460125   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:19.460050   48478 retry.go:31] will retry after 1.017747035s: waiting for machine to come up
	I0626 20:47:20.479538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:20.480140   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:20.480178   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:20.480043   48478 retry.go:31] will retry after 1.379789146s: waiting for machine to come up
	I0626 20:47:19.432325   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:19.461944   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:19.498519   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:19.512703   47605 system_pods.go:59] 9 kube-system pods found
	I0626 20:47:19.512831   47605 system_pods.go:61] "coredns-5d78c9869d-dz48f" [87a67e95-a071-4865-902b-0e401e852456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512860   47605 system_pods.go:61] "coredns-5d78c9869d-lbfsr" [adee7e6b-88b2-412e-bb2d-fc0939bca149] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512905   47605 system_pods.go:61] "etcd-embed-certs-299839" [8aefd012-6a54-4e75-afc9-cc8385212eb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:19.512937   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [e178b5e8-445c-444f-965e-051233c2fa44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:19.512971   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [e965e4af-a673-4b93-bb63-e7bfc0f9514d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:19.512995   47605 system_pods.go:61] "kube-proxy-q5khr" [6c11d667-3490-4417-8e0c-373fe25d06b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:19.513014   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [0385958c-3f22-4eb8-bdac-cbaeb52fe9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:19.513050   47605 system_pods.go:61] "metrics-server-74d5c6b9c-gb6b2" [b5a15d68-23ee-4274-a147-db6f2eef97e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:19.513074   47605 system_pods.go:61] "storage-provisioner" [42bd8483-f594-4bf9-8c32-9688d1d99523] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:19.513093   47605 system_pods.go:74] duration metric: took 14.550735ms to wait for pod list to return data ...
	I0626 20:47:19.513125   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:19.519356   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:19.519455   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:19.519513   47605 node_conditions.go:105] duration metric: took 6.36764ms to run NodePressure ...
	I0626 20:47:19.519573   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:19.935407   47605 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943592   47605 kubeadm.go:787] kubelet initialised
	I0626 20:47:19.943622   47605 kubeadm.go:788] duration metric: took 8.187833ms waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943633   47605 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:19.951319   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.957985   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958016   47605 pod_ready.go:81] duration metric: took 6.605612ms waiting for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.958027   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958037   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.965229   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965312   47605 pod_ready.go:81] duration metric: took 7.251456ms waiting for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.965335   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965391   47605 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:22.010596   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
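
The pod_ready.go lines track each system pod's PodReady condition, and deliberately skip pods hosted on a node that is itself not yet Ready (the embed-certs-299839 entries above). A minimal client-go sketch of the underlying per-pod check, assuming a reachable kubeconfig at the default location and the k8s.io/client-go, k8s.io/api and k8s.io/apimachinery modules in go.mod; the pod name is copied from the log purely for illustration:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True -- the same
// check behind the pod_ready.go "Ready":"True"/"False" lines above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"etcd-embed-certs-299839", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(podReady(pod))
}
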
	I0626 20:47:21.752755   47779 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.48920102s)
	I0626 20:47:21.752790   47779 crio.go:451] Took 3.489344 seconds to extract the tarball
	I0626 20:47:21.752802   47779 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:47:21.800026   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:21.844486   47779 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:47:21.844504   47779 cache_images.go:84] Images are preloaded, skipping loading
	I0626 20:47:21.844573   47779 ssh_runner.go:195] Run: crio config
	I0626 20:47:21.924367   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:21.924397   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:21.924411   47779 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:21.924431   47779 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.238 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-473235 NodeName:default-k8s-diff-port-473235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:47:21.924593   47779 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-473235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:47:21.924685   47779 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-473235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0626 20:47:21.924756   47779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:47:21.934851   47779 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:21.934951   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:21.944791   47779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0626 20:47:21.963087   47779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:21.981936   47779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0626 20:47:22.002207   47779 ssh_runner.go:195] Run: grep 192.168.61.238	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:22.006443   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
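
The grep/echo/cp pipeline above (run once for host.minikube.internal and again here for control-plane.minikube.internal) is an idempotent upsert: drop any line already ending in the hostname, append the fresh mapping, then sudo cp the temp file over /etc/hosts. A rough Go equivalent, assuming the process may write /etc/hosts directly; upsertHost is an illustrative name:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites /etc/hosts so that exactly one line maps name to ip,
// mirroring the { grep -v ...; echo ...; } > /tmp/h.$$; cp pipeline above.
func upsertHost(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t<name>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("192.168.61.238", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
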
	I0626 20:47:22.019555   47779 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235 for IP: 192.168.61.238
	I0626 20:47:22.019591   47779 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:22.019794   47779 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:22.019859   47779 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:22.019983   47779 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.key
	I0626 20:47:22.020069   47779 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key.761b3e7f
	I0626 20:47:22.020126   47779 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key
	I0626 20:47:22.020257   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:22.020296   47779 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:22.020309   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:22.020340   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:22.020376   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:22.020418   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:22.020475   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:22.021354   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:22.045205   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:22.069269   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:22.092387   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:22.120395   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:22.143199   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:22.167864   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:22.192223   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:22.218085   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:22.243249   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:22.269200   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:22.294015   47779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:22.313139   47779 ssh_runner.go:195] Run: openssl version
	I0626 20:47:22.319998   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:22.330864   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337082   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337144   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.343158   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:22.354507   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:22.366438   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371070   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371127   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.376858   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:22.387928   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:22.398665   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403091   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403139   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.410314   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:47:22.421729   47779 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:22.426373   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:22.432450   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:22.438093   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:22.446065   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:22.452103   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:22.457940   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
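
The run above installs the CA certificates under their OpenSSL subject-hash names (/etc/ssl/certs/b5213941.0 and friends, the hash `openssl x509 -hash -noout` prints) and then asks, via `openssl x509 -checkend 86400`, whether any control-plane certificate expires within a day. A minimal Go sketch of both steps, assuming local access to the files and an openssl binary on PATH; function names are illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"
)

// hashLink installs a CA certificate under /etc/ssl/certs/<subject-hash>.0,
// the layout the "ln -fs ... /etc/ssl/certs/b5213941.0" commands above produce.
func hashLink(cert string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(cert, link)
}

// expiresWithin reports whether the first certificate in the PEM file expires
// within d -- the question `openssl x509 -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM certificate found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(hashLink("/usr/share/ca-certificates/minikubeCA.pem"))
	fmt.Println(expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour))
}
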
	I0626 20:47:22.464492   47779 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:22.464647   47779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:22.464707   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:22.497723   47779 cri.go:89] found id: ""
	I0626 20:47:22.497803   47779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:22.508914   47779 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:22.508940   47779 kubeadm.go:636] restartCluster start
	I0626 20:47:22.508994   47779 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:22.519855   47779 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:22.521400   47779 kubeconfig.go:92] found "default-k8s-diff-port-473235" server: "https://192.168.61.238:8444"
	I0626 20:47:22.525126   47779 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:22.536252   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:22.536311   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:22.548698   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.049731   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.049805   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.062575   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.548966   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.549050   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.566351   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.048839   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.048917   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.065016   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.549110   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.549211   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.563150   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:25.049739   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.049828   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.066148   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.496598   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.496624   47309 pod_ready.go:81] duration metric: took 2.105519396s waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.496637   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504045   47309 pod_ready.go:92] pod "kube-proxy-phrv2" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.504067   47309 pod_ready.go:81] duration metric: took 7.42294ms waiting for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504078   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022096   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:25.022123   47309 pod_ready.go:81] duration metric: took 1.518037516s waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022135   47309 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.861798   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:21.981234   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:21.981272   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:21.862292   48478 retry.go:31] will retry after 2.138021733s: waiting for machine to come up
	I0626 20:47:24.002651   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:24.003184   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:24.003215   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:24.003122   48478 retry.go:31] will retry after 2.016131828s: waiting for machine to come up
	I0626 20:47:26.020987   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:26.021487   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:26.021511   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:26.021427   48478 retry.go:31] will retry after 2.317082546s: waiting for machine to come up
	I0626 20:47:24.497636   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:26.997525   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:27.997348   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:27.997394   47605 pod_ready.go:81] duration metric: took 8.031967272s waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:27.997408   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.548979   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.549054   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.566040   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.049569   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.049636   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.061513   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.548864   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.548952   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.566095   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.049674   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.049818   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.067169   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.549748   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.549831   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.568977   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.048852   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.048921   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.064935   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.549510   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.549614   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.562781   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.049396   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.049482   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.063237   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.548762   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.548853   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.561289   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:30.048758   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.048832   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.061079   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.040010   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:29.536317   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.537367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:28.340238   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:28.340738   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:28.340774   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:28.340660   48478 retry.go:31] will retry after 3.9887538s: waiting for machine to come up
	I0626 20:47:30.014224   47605 pod_ready.go:102] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.016636   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.016660   47605 pod_ready.go:81] duration metric: took 3.019245103s waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.016669   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022769   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.022794   47605 pod_ready.go:81] duration metric: took 6.118745ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022806   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.031975   47605 pod_ready.go:92] pod "kube-proxy-q5khr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.032004   47605 pod_ready.go:81] duration metric: took 9.189713ms waiting for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.032015   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040203   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.040231   47605 pod_ready.go:81] duration metric: took 8.207477ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040244   47605 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:33.054175   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:30.549812   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.549897   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.562540   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.049000   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.049071   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.061358   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.549602   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.549664   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.562690   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.049131   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:32.049223   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:32.061951   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.536775   47779 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:32.536827   47779 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:32.536843   47779 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:32.536914   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:32.571353   47779 cri.go:89] found id: ""
	I0626 20:47:32.571434   47779 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:32.588931   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:32.599519   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:32.599585   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610183   47779 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610212   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:32.738386   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.418561   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.612946   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.740311   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.830927   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:33.830992   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.372343   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.872109   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
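
This run polls pgrep every 500 ms for a kube-apiserver process; each exit status 1 simply means the process does not exist yet, and once the overall deadline passed the caller concluded "needs reconfigure: apiserver error: context deadline exceeded" and re-ran the kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) before waiting again. A minimal sketch of that process wait, assuming sudo and pgrep are available; waitForProcess is an illustrative name:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep every 500ms until the pattern matches or the
// context deadline expires, mirroring the api_server.go loop in the log:
// pgrep exits 1 while no matching kube-apiserver process exists yet.
func waitForProcess(ctx context.Context, pattern string) error {
	for {
		if out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output(); err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for %q: %w", pattern, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
		fmt.Println(err) // e.g. "context deadline exceeded", as in the log
	}
}
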
	I0626 20:47:33.542864   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:36.037521   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:32.332668   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:32.333139   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:32.333169   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:32.333084   48478 retry.go:31] will retry after 3.571549947s: waiting for machine to come up
	I0626 20:47:35.906478   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.906962   46683 main.go:141] libmachine: (old-k8s-version-490377) Found IP for machine: 192.168.72.111
	I0626 20:47:35.906994   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has current primary IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.907004   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserving static IP address...
	I0626 20:47:35.907527   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.907573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | skip adding static IP to network mk-old-k8s-version-490377 - found existing host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"}
	I0626 20:47:35.907588   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserved static IP address: 192.168.72.111
	I0626 20:47:35.907605   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting for SSH to be available...
	I0626 20:47:35.907658   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Getting to WaitForSSH function...
	I0626 20:47:35.909932   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910346   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.910383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH client type: external
	I0626 20:47:35.910573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa (-rw-------)
	I0626 20:47:35.910604   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:35.910620   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | About to run SSH command:
	I0626 20:47:35.910635   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | exit 0
	I0626 20:47:36.006056   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | SSH cmd err, output: <nil>: 
	I0626 20:47:36.006429   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetConfigRaw
	I0626 20:47:36.007160   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.010144   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010519   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.010551   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010863   46683 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/config.json ...
	I0626 20:47:36.011106   46683 machine.go:88] provisioning docker machine ...
	I0626 20:47:36.011130   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.011366   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011542   46683 buildroot.go:166] provisioning hostname "old-k8s-version-490377"
	I0626 20:47:36.011561   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011705   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.014236   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014643   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.014674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014821   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.015013   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015156   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015371   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.015595   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.016010   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.016029   46683 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-490377 && echo "old-k8s-version-490377" | sudo tee /etc/hostname
	I0626 20:47:36.160735   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490377
	
	I0626 20:47:36.160797   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.163857   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164373   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.164425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164566   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.164778   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.164983   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.165128   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.165311   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.166001   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.166030   46683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-490377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-490377/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-490377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:36.302740   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:47:36.302789   46683 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:36.302839   46683 buildroot.go:174] setting up certificates
	I0626 20:47:36.302852   46683 provision.go:83] configureAuth start
	I0626 20:47:36.302868   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.303151   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.305958   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306411   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.306439   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306667   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.309069   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309447   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.309480   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309538   46683 provision.go:138] copyHostCerts
	I0626 20:47:36.309622   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:36.309635   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:36.309702   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:36.309813   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:36.309830   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:36.309868   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:36.309938   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:36.309947   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:36.309970   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:36.310026   46683 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-490377 san=[192.168.72.111 192.168.72.111 localhost 127.0.0.1 minikube old-k8s-version-490377]
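The server certificate above is minted with SANs covering the node IP, loopback, and both hostnames, so TLS verification succeeds however the endpoint is addressed. A self-signed sketch with the same SAN set via crypto/x509 (minikube signs with its CA rather than self-signing, so this is illustrative only):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// SANs mirror the san=[...] list in the log line above.
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-490377"}},
			IPAddresses:  []net.IP{net.ParseIP("192.168.72.111"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-490377"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		fmt.Println(len(der), err)
	}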
	I0626 20:47:36.441131   46683 provision.go:172] copyRemoteCerts
	I0626 20:47:36.441183   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:36.441204   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.444557   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445034   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.445067   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445311   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.445540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.445700   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.445857   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:36.542375   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:36.570185   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0626 20:47:36.596725   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:36.622954   46683 provision.go:86] duration metric: configureAuth took 320.087643ms
	I0626 20:47:36.622983   46683 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:36.623205   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:47:36.623301   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.626305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626634   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.626666   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626856   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.627048   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627224   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627349   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.627520   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.627929   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.627954   46683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:36.963666   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:36.963695   46683 machine.go:91] provisioned docker machine in 952.57418ms
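The drop-in written above is an environment file consumed by the crio systemd unit; restarting crio applies the --insecure-registry flag for the 10.96.0.0/12 service CIDR. A sketch that produces the same file contents, written to a relative path (an assumption, so the sketch runs unprivileged):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On the VM this lands at /etc/sysconfig/crio.minikube.
		content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		if err := os.WriteFile("crio.minikube", []byte(content), 0o644); err != nil {
			fmt.Println(err)
			return
		}
		// A `systemctl restart crio`, as in the log, would then pick it up.
	}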
	I0626 20:47:36.963707   46683 start.go:300] post-start starting for "old-k8s-version-490377" (driver="kvm2")
	I0626 20:47:36.963719   46683 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:36.963747   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.964067   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:36.964099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.966948   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.967383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967528   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.967735   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.967900   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.968052   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.070309   46683 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:37.075040   46683 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:37.075064   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:37.075125   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:37.075208   46683 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:37.075306   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:37.086362   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:37.110475   46683 start.go:303] post-start completed in 146.752359ms
	I0626 20:47:37.110502   46683 fix.go:56] fixHost completed within 22.522880386s
	I0626 20:47:37.110525   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.113530   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.113925   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.113961   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.114168   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.114372   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114577   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114730   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.114896   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:37.115549   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:37.115572   46683 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:47:37.247352   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812457.183569581
	
	I0626 20:47:37.247376   46683 fix.go:206] guest clock: 1687812457.183569581
	I0626 20:47:37.247386   46683 fix.go:219] Guest: 2023-06-26 20:47:37.183569581 +0000 UTC Remote: 2023-06-26 20:47:37.110506986 +0000 UTC m=+360.350082215 (delta=73.062595ms)
	I0626 20:47:37.247410   46683 fix.go:190] guest clock delta is within tolerance: 73.062595ms
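The fix step compares the guest clock (read over SSH with date +%s.%N) against the host-side timestamp and only resyncs when the delta exceeds a tolerance; the 73ms here passes. Reproducing the delta arithmetic from the values in the log (the tolerance constant is an illustrative assumption, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1687812457, 183569581)
		remote := time.Date(2023, time.June, 26, 20, 47, 37, 110506986, time.UTC)
		delta := guest.Sub(remote)
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v within=%v\n", delta, delta.Abs() < tolerance)
	}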
	I0626 20:47:37.247416   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 22.659832787s
	I0626 20:47:37.247442   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.247723   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:37.250740   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251154   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.251194   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251316   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.251835   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252015   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252101   46683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:37.252144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.252251   46683 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:37.252273   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.255147   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255231   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255440   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255464   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255584   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.255756   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.255765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255792   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255930   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.255946   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.256080   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.256099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.256206   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.256301   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.370571   46683 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:37.376548   46683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:37.531359   46683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:37.540038   46683 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:37.540104   46683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:37.556531   46683 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:47:37.556554   46683 start.go:466] detecting cgroup driver to use...
	I0626 20:47:37.556620   46683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:37.574430   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:37.586766   46683 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:37.586829   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:37.599572   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:37.612901   46683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:37.717489   46683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:37.851503   46683 docker.go:212] disabling docker service ...
	I0626 20:47:37.851576   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:37.864932   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:37.877087   46683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:37.990007   46683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:38.107613   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:47:38.122183   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:38.141502   46683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0626 20:47:38.141567   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.152052   46683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:38.152128   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.161786   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.172779   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
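The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and move conmon into the pod cgroup. The same rewrite sketched with regexp over a local copy of 02-crio.conf (the relative path is an assumption):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		data, err := os.ReadFile("02-crio.conf")
		if err != nil {
			fmt.Println(err)
			return
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
		// Mirror the delete-then-append of conmon_cgroup from the log.
		out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(out, nil)
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
		if err := os.WriteFile("02-crio.conf", out, 0o644); err != nil {
			fmt.Println(err)
		}
	}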
	I0626 20:47:38.182823   46683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:38.192695   46683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:38.201322   46683 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:38.201404   46683 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:38.213549   46683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
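The sysctl probe returns status 255 because /proc/sys/net/bridge only exists once the br_netfilter module is loaded; the log treats that as non-fatal, falls back to modprobe, and then enables IPv4 forwarding. The same check-then-load fallback sketched with os/exec (needs root to actually take effect):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// Key absent until the module loads; mirror the fallback.
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				fmt.Println("modprobe failed:", err)
				return
			}
		}
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			fmt.Println("enable ip_forward failed:", err)
		}
	}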
	I0626 20:47:38.225080   46683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:38.336249   46683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:38.508323   46683 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:38.508443   46683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:47:38.514430   46683 start.go:534] Will wait 60s for crictl version
	I0626 20:47:38.514496   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:38.518918   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:38.559642   46683 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:38.559731   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.616720   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.678573   46683 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0626 20:47:35.555132   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.053446   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:35.373039   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.872006   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.895929   47779 api_server.go:72] duration metric: took 2.064992302s to wait for apiserver process to appear ...
	I0626 20:47:35.895959   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:35.895982   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:35.896602   47779 api_server.go:269] stopped: https://192.168.61.238:8444/healthz: Get "https://192.168.61.238:8444/healthz": dial tcp 192.168.61.238:8444: connect: connection refused
	I0626 20:47:36.397305   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.868801   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.868839   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.868854   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.907251   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.907280   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.907310   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.921394   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.921428   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:40.397045   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.405040   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.405071   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:40.897690   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.904374   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.904424   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:41.396883   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:41.404743   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:47:41.420191   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:41.420219   47779 api_server.go:131] duration metric: took 5.524252602s to wait for apiserver health ...
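The healthz progression above is the normal bootstrap arc: 403 while the anonymous probe hits an apiserver whose RBAC bootstrap roles are not yet installed, 500 while poststarthooks finish, then 200. A sketch of a poller that treats anything but 200 as retryable (endpoint from the log; deadline and interval are assumptions):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Anonymous client, so the early 403s for system:anonymous are
		// expected; InsecureSkipVerify because the probe predates
		// trusting the cluster CA.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if resp, err := client.Get("https://192.168.61.238:8444/healthz"); err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// 403 and 500 both mean "still bootstrapping": retry.
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}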
	I0626 20:47:41.420231   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:41.420249   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:41.422187   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:47:38.537628   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:40.538267   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.680019   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:38.682934   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683263   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:38.683294   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683534   46683 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:38.687976   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:38.701534   46683 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 20:47:38.701610   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:38.739497   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:38.739584   46683 ssh_runner.go:195] Run: which lz4
	I0626 20:47:38.744080   46683 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:38.748755   46683 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:38.748792   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0626 20:47:40.654759   46683 crio.go:444] Took 1.910714 seconds to copy over tarball
	I0626 20:47:40.654830   46683 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:47:40.057751   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:42.555707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:41.423617   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:41.447117   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:41.485897   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:41.505667   47779 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:41.505714   47779 system_pods.go:61] "coredns-5d78c9869d-78zrr" [2927dce3-aa13-4ed4-b5a4-bc1b101ec044] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:41.505730   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [5bbba401-cfdd-4e97-ac44-3d1410344b23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:41.505742   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [90d064bc-d31f-4690-b100-8979cdd518c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:41.505755   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [3f686efe-3c90-42ed-a1b9-2cda3e7e49b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:41.505773   47779 system_pods.go:61] "kube-proxy-7t2dk" [bebeb55d-8c7d-4543-9ee1-adbd946904f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:41.505786   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [c2436cf6-0128-425c-9db3-b3d01e5fb5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:41.505799   47779 system_pods.go:61] "metrics-server-74d5c6b9c-swcxn" [81e42c6b-4c7d-40b1-bd4a-ccf7ce2dea17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:41.505811   47779 system_pods.go:61] "storage-provisioner" [18d1c7dc-00a6-4842-b441-f3468adde4ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:41.505822   47779 system_pods.go:74] duration metric: took 19.895923ms to wait for pod list to return data ...
	I0626 20:47:41.505833   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:41.515165   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:41.515201   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:41.515215   47779 node_conditions.go:105] duration metric: took 9.372368ms to run NodePressure ...
	I0626 20:47:41.515243   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:41.848353   47779 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854780   47779 kubeadm.go:787] kubelet initialised
	I0626 20:47:41.854805   47779 kubeadm.go:788] duration metric: took 6.420882ms waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854814   47779 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:41.861323   47779 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.867181   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867214   47779 pod_ready.go:81] duration metric: took 5.86597ms waiting for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.867225   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867235   47779 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.872900   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872928   47779 pod_ready.go:81] duration metric: took 5.684109ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.872940   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872948   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.881471   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881501   47779 pod_ready.go:81] duration metric: took 8.543041ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.881513   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881531   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.892246   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892292   47779 pod_ready.go:81] duration metric: took 10.741136ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.892310   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892325   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297272   47779 pod_ready.go:92] pod "kube-proxy-7t2dk" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:43.297299   47779 pod_ready.go:81] duration metric: took 1.404965565s waiting for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297308   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
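Each pod_ready wait above reduces to inspecting the pod's Ready condition, with the special case that a pod on a NotReady node is skipped with an error rather than awaited. The core predicate, sketched with plain structs instead of the Kubernetes API types:

	package main

	import "fmt"

	type condition struct{ Type, Status string }

	// podReady counts a pod as Ready only when its Ready condition is True.
	func podReady(conds []condition) bool {
		for _, c := range conds {
			if c.Type == "Ready" {
				return c.Status == "True"
			}
		}
		return false
	}

	func main() {
		fmt.Println(podReady([]condition{{"Ready", "False"}})) // keep waiting
		fmt.Println(podReady([]condition{{"Ready", "True"}}))  // done
	}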
	I0626 20:47:42.544224   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.846930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.389432   46683 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.73456858s)
	I0626 20:47:44.389462   46683 crio.go:451] Took 3.734677 seconds to extract the tarball
	I0626 20:47:44.389480   46683 ssh_runner.go:146] rm: /preloaded.tar.lz4
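The preload path, completed above, is: scp the lz4 tarball into the VM, extract it into /var so the images land in CRI-O's storage, then remove the tarball. The extract-and-clean step sketched with os/exec (the local tarball name is an assumption; tar and lz4 must be installed, and writing under /var needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		tarball := "preloaded-images.tar.lz4"
		if err := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
			fmt.Println("extract failed:", err)
			return
		}
		if err := os.Remove(tarball); err != nil {
			fmt.Println("cleanup failed:", err)
		}
	}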
	I0626 20:47:44.438169   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:44.478220   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:44.478250   46683 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:47:44.478337   46683 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.478364   46683 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.478383   46683 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.478384   46683 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.478450   46683 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0626 20:47:44.478365   46683 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.478345   46683 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.478339   46683 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479752   46683 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.479758   46683 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.479760   46683 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.479759   46683 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.479748   46683 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.479802   46683 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.479810   46683 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479817   46683 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0626 20:47:44.681554   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720619   46683 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0626 20:47:44.720677   46683 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720730   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.724810   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.753258   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0626 20:47:44.765072   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.767167   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.768723   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0626 20:47:44.769466   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.769474   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.807428   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.904206   46683 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0626 20:47:44.904243   46683 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0626 20:47:44.904250   46683 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.904261   46683 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926166   46683 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0626 20:47:44.926203   46683 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.926204   46683 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0626 20:47:44.926222   46683 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.926222   46683 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0626 20:47:44.926248   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926247   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926251   46683 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0626 20:47:44.926365   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936135   46683 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0626 20:47:44.936175   46683 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.936236   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936252   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.936274   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.940272   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.940352   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0626 20:47:44.940409   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.952147   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:45.031640   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0626 20:47:45.031677   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0626 20:47:45.061947   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0626 20:47:45.062070   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0626 20:47:45.062166   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0626 20:47:45.062261   46683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.062279   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0626 20:47:45.067511   46683 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0626 20:47:45.067561   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0626 20:47:45.094726   46683 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.094780   46683 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.384887   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:45.947601   46683 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0626 20:47:45.947707   46683 cache_images.go:92] LoadImages completed in 1.469441722s
	W0626 20:47:45.947778   46683 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
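The cache_images steps above follow a check-then-transfer shape: stat the image tarball on the guest, copy it from the host cache only when the stat fails, then load it with podman. A rough local-only sketch of that same flow is below; the paths are modeled on the log, and minikube itself runs these commands over SSH rather than locally.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureImageLoaded copies src to dst if dst is absent, then loads it,
// mirroring the existence check / scp / `sudo podman load -i` sequence above.
func ensureImageLoaded(src, dst string) error {
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		data, err := os.ReadFile(src)
		if err != nil {
			return fmt.Errorf("read cached image: %w", err)
		}
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			return fmt.Errorf("copy image tarball: %w", err)
		}
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical paths, modeled on the cache layout in the log.
	err := ensureImageLoaded(
		os.ExpandEnv("$HOME/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1"),
		"/var/lib/minikube/images/pause_3.1",
	)
	fmt.Println(err)
}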
	I0626 20:47:45.947863   46683 ssh_runner.go:195] Run: crio config
	I0626 20:47:46.009928   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:47:46.009955   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:46.009968   46683 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:46.009987   46683 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-490377 NodeName:old-k8s-version-490377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0626 20:47:46.010140   46683 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-490377"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-490377
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.111:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:47:46.010224   46683 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-490377 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
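Both the kubeadm YAML above and this kubelet unit are rendered from a typed options struct through a template before being copied onto the guest. A toy illustration of that rendering step follows; the struct fields and the unit text are deliberately simplified assumptions, not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// nodeOpts is a cut-down stand-in for the kubeadm options struct in the log.
type nodeOpts struct {
	NodeName, NodeIP, K8sVersion string
}

const kubeletUnit = `[Service]
ExecStart=/var/lib/minikube/binaries/{{.K8sVersion}}/kubelet \
  --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} \
  --kubeconfig=/etc/kubernetes/kubelet.conf
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, nodeOpts{
		NodeName:   "old-k8s-version-490377",
		NodeIP:     "192.168.72.111",
		K8sVersion: "v1.16.0",
	})
}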
	I0626 20:47:46.010284   46683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0626 20:47:46.023111   46683 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:46.023196   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:46.034988   46683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0626 20:47:46.056824   46683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:46.077802   46683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0626 20:47:46.102465   46683 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:46.107391   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:46.121242   46683 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377 for IP: 192.168.72.111
	I0626 20:47:46.121277   46683 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:46.121466   46683 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:46.121520   46683 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:46.121635   46683 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.key
	I0626 20:47:46.121735   46683 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key.760f2aeb
	I0626 20:47:46.121789   46683 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key
	I0626 20:47:46.121928   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:46.121970   46683 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:46.121985   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:46.122024   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:46.122063   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:46.122098   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:46.122158   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:46.123026   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:46.149101   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:46.179305   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:46.207421   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:46.233407   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:46.259148   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:46.284728   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:46.312152   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:46.341061   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:46.370455   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:46.398160   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:46.424710   46683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:46.446379   46683 ssh_runner.go:195] Run: openssl version
	I0626 20:47:46.452825   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:46.466808   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472676   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472760   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.479077   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:46.490061   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:46.501801   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.506966   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.507034   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.513146   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:46.523600   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:46.534659   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540612   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540677   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.548499   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:47:46.562786   46683 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:46.569679   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:46.576129   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:46.582331   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:46.588334   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:46.595635   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:46.603058   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
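Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours, which is how minikube decides whether the existing certs can be reused. The equivalent check in Go's crypto/x509 is sketched below; the file path is a placeholder taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(expiring, err)
}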
	I0626 20:47:46.611126   46683 kubeadm.go:404] StartCluster: {Name:old-k8s-version-490377 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:46.611211   46683 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:46.611277   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:46.650099   46683 cri.go:89] found id: ""
	I0626 20:47:46.650177   46683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:46.660940   46683 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:46.660964   46683 kubeadm.go:636] restartCluster start
	I0626 20:47:46.661022   46683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:46.671400   46683 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:46.672450   46683 kubeconfig.go:92] found "old-k8s-version-490377" server: "https://192.168.72.111:8443"
	I0626 20:47:46.675477   46683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:46.684496   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:46.684568   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:46.695719   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:45.056085   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.554295   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:45.865956   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:48.003697   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.505286   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:49.505314   47779 pod_ready.go:81] duration metric: took 6.207998312s waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:49.505328   47779 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:47.037142   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.037207   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.535460   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.196149   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.196252   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.211751   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:47.696286   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.696381   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.707472   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.195967   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.196041   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.207809   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.696375   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.696449   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.708571   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.196097   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.196176   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.207717   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.696692   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.696768   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.708954   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.196531   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.196611   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.209111   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.696563   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.696648   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.708744   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.196237   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.196305   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.207654   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.695908   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.695988   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.708029   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.056186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.057083   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.519442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.520019   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.536833   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.036673   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.196170   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.196233   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.208953   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:52.696518   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.696600   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.707537   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.196046   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.196113   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.207272   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.695791   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.695873   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.706845   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.196452   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.196530   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.208048   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.696169   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.696236   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.707640   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.195889   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.195968   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.207560   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.695899   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.695978   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.707573   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:56.195900   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:56.195973   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:56.207335   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
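The repeated `pgrep -xnf kube-apiserver.*minikube.*` probes above run on a fixed interval until a PID appears or the restart deadline passes; the `context deadline exceeded` decision logged at 20:47:56 below is that timeout firing. A compact sketch of the loop follows; the half-second interval and the 10-second deadline are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or ctx expires.
func waitForProcess(ctx context.Context, pattern string) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 1 when nothing matches, so err != nil means "not yet".
		if pid, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Output(); err == nil {
			return string(pid), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("no process matching %q: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
}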
	I0626 20:47:56.685138   46683 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:56.685165   46683 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:56.685180   46683 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:56.685239   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:56.719427   46683 cri.go:89] found id: ""
	I0626 20:47:56.719494   46683 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:56.735328   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:56.747355   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:56.747420   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756129   46683 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756156   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:54.554213   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:57.052902   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:59.055349   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.018337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.025514   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.039195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.538216   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.883656   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.423073   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.641018   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.751205   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.840521   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:57.840645   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.355178   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.854929   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.355164   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.385611   46683 api_server.go:72] duration metric: took 1.545094971s to wait for apiserver process to appear ...
	I0626 20:47:59.385632   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:59.385650   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:01.553510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.554922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.520442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.021809   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.040767   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.535801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:04.386860   46683 api_server.go:269] stopped: https://192.168.72.111:8443/healthz: Get "https://192.168.72.111:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0626 20:48:04.888001   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:05.958461   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:48:05.958486   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:48:05.958498   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.017029   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.017061   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.387577   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.394038   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.394072   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.887033   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.902891   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.902931   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:07.387632   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:07.393827   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:48:07.402591   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:48:07.402618   46683 api_server.go:131] duration metric: took 8.016980167s to wait for apiserver health ...
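The healthz loop above keeps re-fetching /healthz, treating the 403 (anonymous user rejected before RBAC bootstrap completes) and the 500 poststarthook failures as "not ready yet" and stopping only on a 200 "ok". A minimal client for that probe is sketched below; the InsecureSkipVerify shortcut and the attempt count are illustration-only simplifications, since minikube verifies against the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or attempts run out.
func waitHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip TLS verification instead of trusting the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.72.111:8443/healthz", 20))
}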
	I0626 20:48:07.402628   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:48:07.402639   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:48:07.404494   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:48:06.054185   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:08.055165   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.520306   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.521293   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:10.021358   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.537058   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:09.537801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.405919   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:48:07.416748   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:48:07.436249   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:48:07.445695   46683 system_pods.go:59] 7 kube-system pods found
	I0626 20:48:07.445732   46683 system_pods.go:61] "coredns-5644d7b6d9-5lcxw" [8e1a5fff-55d8-4d32-ae6f-c7694c8b5878] Running
	I0626 20:48:07.445741   46683 system_pods.go:61] "etcd-old-k8s-version-490377" [3fff7ab3-7ac7-4417-b3b8-9794f427c880] Running
	I0626 20:48:07.445750   46683 system_pods.go:61] "kube-apiserver-old-k8s-version-490377" [1b8e6b87-0b15-4586-8133-2dd33ac0b069] Running
	I0626 20:48:07.445771   46683 system_pods.go:61] "kube-controller-manager-old-k8s-version-490377" [2635a03c-884d-4245-a8ef-cb02e14443b8] Running
	I0626 20:48:07.445792   46683 system_pods.go:61] "kube-proxy-64btm" [0a8ee3c6-93a1-4989-94d0-209e8c655a64] Running
	I0626 20:48:07.445805   46683 system_pods.go:61] "kube-scheduler-old-k8s-version-490377" [2a6905a0-4f64-4cab-9b6d-55c708c07f8d] Running
	I0626 20:48:07.445815   46683 system_pods.go:61] "storage-provisioner" [9bf36874-b862-41f9-89d4-2d900adc2003] Running
	I0626 20:48:07.445826   46683 system_pods.go:74] duration metric: took 9.553318ms to wait for pod list to return data ...
	I0626 20:48:07.445836   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:48:07.450777   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:48:07.450816   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:48:07.450831   46683 node_conditions.go:105] duration metric: took 4.985221ms to run NodePressure ...
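The system_pods and node_conditions checks above read the kube-system pod list and the node capacity (CPU, ephemeral storage) straight from the API. A short client-go sketch of those two reads follows; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}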
	I0626 20:48:07.450854   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:48:07.693070   46683 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:48:07.696336   46683 retry.go:31] will retry after 291.332727ms: kubelet not initialised
	I0626 20:48:07.992856   46683 retry.go:31] will retry after 210.561512ms: kubelet not initialised
	I0626 20:48:08.208369   46683 retry.go:31] will retry after 371.110023ms: kubelet not initialised
	I0626 20:48:08.585342   46683 retry.go:31] will retry after 1.199452561s: kubelet not initialised
	I0626 20:48:09.790625   46683 retry.go:31] will retry after 923.734482ms: kubelet not initialised
	I0626 20:48:10.719166   46683 retry.go:31] will retry after 1.019822632s: kubelet not initialised
	I0626 20:48:11.743554   46683 retry.go:31] will retry after 3.253867153s: kubelet not initialised
	I0626 20:48:10.552964   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.554534   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.520923   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.019384   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.036991   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:14.536734   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.002028   46683 retry.go:31] will retry after 2.234934883s: kubelet not initialised
	I0626 20:48:14.556223   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.053741   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.054276   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.021470   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.519794   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.036192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.036285   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:21.037136   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.242709   46683 retry.go:31] will retry after 6.079359776s: kubelet not initialised
	I0626 20:48:21.054851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.553653   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:22.020435   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:24.022102   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.037271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:25.037337   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.328332   46683 retry.go:31] will retry after 12.999865358s: kubelet not initialised
	I0626 20:48:25.553983   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.052253   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:26.518782   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.520217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:27.535792   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:29.536336   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:30.055419   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.553794   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:31.018773   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:33.020048   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:35.021492   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.036513   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:34.037364   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.535663   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.334795   46683 retry.go:31] will retry after 13.541680893s: kubelet not initialised
	I0626 20:48:35.052975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.053634   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.053672   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.519603   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.520279   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:38.536271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:40.536344   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.553411   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.554235   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.520569   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.522354   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:42.536811   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.035291   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.554795   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.053080   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:46.019919   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.021534   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:47.036908   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.537386   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.882566   46683 kubeadm.go:787] kubelet initialised
	I0626 20:48:49.882597   46683 kubeadm.go:788] duration metric: took 42.189498896s waiting for restarted kubelet to initialise ...
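The "will retry after …" lines above come from minikube's retry helper: it re-probes the restarted kubelet at growing, jittered intervals (291ms, 210ms, 371ms, … up to ~13.5s) until the probe succeeds, here after about 42s. A minimal sketch of that retry-until-deadline shape, assuming a hypothetical `probe` function rather than minikube's actual internals:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs probe with roughly doubling, jittered delays until it
// succeeds or the deadline passes -- the shape of the "will retry after ..."
// lines above. The probe function is a stand-in, not minikube's API.
func retryUntil(deadline time.Duration, probe func() error) error {
	start := time.Now()
	delay := 300 * time.Millisecond
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		// Jitter the delay so concurrent waiters don't probe in lockstep.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: kubelet not initialised\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryUntil(2*time.Minute, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("kubelet not initialised")
		}
		return nil
	})
	fmt.Println("done:", err)
}
```

The jitter matters in a run like this one, where four processes (46683, 47605, 47779, 47309) are polling concurrently and would otherwise hit the node in lockstep.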
	I0626 20:48:49.882608   46683 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:48:49.888018   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894462   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.894488   46683 pod_ready.go:81] duration metric: took 6.438689ms waiting for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894501   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899336   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.899358   46683 pod_ready.go:81] duration metric: took 4.848554ms waiting for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899370   46683 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903574   46683 pod_ready.go:92] pod "etcd-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.903593   46683 pod_ready.go:81] duration metric: took 4.21548ms waiting for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903605   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908052   46683 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.908071   46683 pod_ready.go:81] duration metric: took 4.457812ms waiting for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908091   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281099   46683 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.281124   46683 pod_ready.go:81] duration metric: took 373.02512ms waiting for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281139   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681520   46683 pod_ready.go:92] pod "kube-proxy-64btm" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.681541   46683 pod_ready.go:81] duration metric: took 400.395983ms waiting for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681552   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081638   46683 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:51.081657   46683 pod_ready.go:81] duration metric: took 400.09969ms waiting for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081666   46683 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
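Each pod_ready.go:102 line in this log is one poll of a pod's PodReady condition via the Kubernetes API. A sketch of that predicate using client-go; the pod name is taken from the log, but the surrounding wiring is illustrative, not minikube's exact code:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True --
// the same predicate behind the Ready:"True"/"False" lines above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(
		context.Background(), "metrics-server-74d5856cc6-985dp", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready: %v\n", pod.Name, podReady(pod))
}
```

The metrics-server pods polled below never reach Ready:"True", which is what eventually trips the 4m0s deadlines further down.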
	I0626 20:48:50.053581   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.053802   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:50.520090   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.019821   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.020035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.037008   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.037516   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:56.037585   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.491534   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.989758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.552843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.054370   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.020770   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.520039   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.535930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.536377   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.488491   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.489659   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.552927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.056474   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:01.520560   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.019945   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.536728   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.537724   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.989651   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.989796   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.552707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.553918   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:08.554230   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.520608   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.020075   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:07.036576   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.537071   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.990147   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.489229   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.053576   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:13.054110   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.519744   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.020968   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:12.037949   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.537389   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.989856   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.488429   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.490529   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:15.553553   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.054036   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.519975   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.520288   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:17.036172   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:19.036248   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.036421   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.989943   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.990154   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.553570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.554626   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.020817   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.520602   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.036595   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.038742   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.990299   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:24.994358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.053465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.053635   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.520912   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:28.020413   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.537294   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.489707   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.990957   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.552847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:31.554360   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.052585   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:30.520207   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.521484   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:35.020064   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.035666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.036325   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.535889   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.489468   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.989668   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.556092   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.054617   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:37.519850   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:40.020217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.036499   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.537332   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.992357   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.489925   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.553528   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.052935   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:42.520450   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.520634   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.035299   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.036688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.990255   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.489449   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.553009   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.553560   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:47.018978   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.020289   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.535753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.536227   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.990710   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.490459   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.553710   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.054824   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.520532   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:54.027509   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:52.537108   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.036452   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.989608   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.990105   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.990610   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.552894   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.553520   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:56.519796   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.021401   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.537189   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.537365   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.991065   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.489396   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.053139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.062882   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:01.519625   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:03.520031   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.037036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.988698   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.991107   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.551742   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:06.553955   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.053612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:05.520676   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:08.019671   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:10.021418   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.035613   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.036666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.536861   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.488874   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.490059   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.492236   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.553481   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.054574   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:12.518824   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.519670   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.036399   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.537496   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:13.990228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.488219   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.054609   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.553511   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.519795   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.520535   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:19.037355   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.037964   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.488819   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:20.489536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.053521   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.553922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.021035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.519784   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.535974   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.536845   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:22.988574   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:24.990088   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:26.052017   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.054905   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.520011   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.019323   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.019500   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.537999   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.036187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.488859   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:29.990482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.551701   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.554272   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.019810   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.023728   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.036817   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.042849   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.536415   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.488492   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.491986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:35.053986   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:37.055115   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.520551   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.019307   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:38.537119   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:40.537474   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.991471   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.489241   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.490458   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.552836   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.553914   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:44.052850   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.020033   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.520646   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.036648   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:45.036959   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.990768   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.489482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.053271   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.553811   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.018851   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.021042   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.021254   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:47.536099   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.036995   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.489670   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.990231   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.554677   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.053841   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.520067   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.021727   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.042201   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:54.536260   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.489402   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.492509   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.055031   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.055181   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.521342   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.020905   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.036992   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.037534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:01.538152   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.993709   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.488776   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.555263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.054478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.519672   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:05.020878   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.036330   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.036424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.489742   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.988712   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.555161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.052680   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.055326   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.519641   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.520120   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.536306   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:10.537094   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.988973   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.989715   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.488986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.554973   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.054638   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.019264   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.020253   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.537126   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.037318   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:13.490053   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.988498   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.055193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:18.553665   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.522548   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.020609   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.536765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.037132   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.990230   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.991216   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.555044   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.055590   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:21.520052   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.520574   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:22.038085   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.535549   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.022544   47309 pod_ready.go:81] duration metric: took 4m0.000394525s waiting for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:25.022570   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:25.022598   47309 pod_ready.go:38] duration metric: took 4m12.221722724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:25.022623   47309 kubeadm.go:640] restartCluster took 4m31.561880232s
	W0626 20:51:25.022684   47309 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:25.022722   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
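Here process 47309 abandons the restart path and falls back to wiping the cluster: `kubeadm reset`, then a fresh `kubeadm init` (visible further down). A compressed, runnable sketch of that fallback control flow; `run` and `restartCluster` are hypothetical stand-ins, and the commands are echoed rather than executed:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command the way the ssh_runner lines above do,
// but locally and harmlessly (commands are echoed). Purely illustrative.
func run(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

// restartCluster stands in for the wait that just timed out above.
func restartCluster() error {
	return fmt.Errorf("timed out waiting 4m0s for system-critical pods")
}

func main() {
	if err := restartCluster(); err == nil {
		return // pods went Ready; nothing to do
	}
	// "Unable to restart cluster, will reset it": wipe state, then re-init.
	_ = run("echo sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force")
	_ = run("echo sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml")
}
```

Processes 47605 and 47779 hit the same deadline a few seconds later and take the identical reset path.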
	I0626 20:51:22.489438   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.490731   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.554637   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:27.555070   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.020700   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.520337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.990408   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.990900   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.490197   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:30.053627   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.041205   47605 pod_ready.go:81] duration metric: took 4m0.000945978s waiting for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:31.041235   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:31.041252   47605 pod_ready.go:38] duration metric: took 4m11.097608636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:31.041297   47605 kubeadm.go:640] restartCluster took 4m31.299321581s
	W0626 20:51:31.041365   47605 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:31.041409   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:31.019045   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.022453   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.492871   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.989984   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.520977   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:37.521128   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.021691   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:38.489349   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.989368   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.519812   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:44.520689   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.989461   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:45.491205   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:47.019936   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.506391   47779 pod_ready.go:81] duration metric: took 4m0.001048325s waiting for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:49.506423   47779 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:49.506441   47779 pod_ready.go:38] duration metric: took 4m7.651614118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:49.506483   47779 kubeadm.go:640] restartCluster took 4m26.997522391s
	W0626 20:51:49.506561   47779 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:49.506595   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:47.990134   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.990758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:52.489144   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:54.990008   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:56.650050   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.627303734s)
	I0626 20:51:56.650132   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:51:56.665246   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:51:56.678749   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:51:56.690413   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
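The exit-status-2 `ls` above is how the stale-config check decides there is nothing to clean up: `kubeadm reset` already removed all four kubeconfigs. The same existence check, expressed directly (paths taken from the log):

```go
package main

import (
	"fmt"
	"os"
)

// After `kubeadm reset`, none of these kubeconfigs exist, so the ls above
// exits 2 and minikube skips stale-config cleanup. The same check, locally:
func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	stale := false
	for _, f := range files {
		if _, err := os.Stat(f); err == nil {
			stale = true
			fmt.Println("found leftover config:", f)
		}
	}
	if !stale {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}
```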
	I0626 20:51:56.690459   47309 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:51:56.757308   47309 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:51:56.757415   47309 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:51:56.915845   47309 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:51:56.916021   47309 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:51:56.916158   47309 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:51:57.137465   47309 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:51:57.139330   47309 out.go:204]   - Generating certificates and keys ...
	I0626 20:51:57.139431   47309 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:51:57.139514   47309 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:51:57.139648   47309 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:51:57.139718   47309 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:51:57.139852   47309 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:51:57.139914   47309 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:51:57.139997   47309 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:51:57.140101   47309 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:51:57.140224   47309 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:51:57.140830   47309 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:51:57.141343   47309 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:51:57.141471   47309 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:51:57.294061   47309 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:51:57.436714   47309 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:51:57.707612   47309 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:51:57.875383   47309 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:51:57.893698   47309 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:51:57.895257   47309 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:51:57.895427   47309 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:51:58.020261   47309 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:51:58.022209   47309 out.go:204]   - Booting up control plane ...
	I0626 20:51:58.022349   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:51:58.023359   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:51:58.024253   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:51:58.026955   47309 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:51:58.032948   47309 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:51:57.489729   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:59.490578   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:01.491617   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:05.539291   47309 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505351 seconds
	I0626 20:52:05.539449   47309 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:05.564127   47309 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:06.097928   47309 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:06.098155   47309 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-934450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:06.617147   47309 kubeadm.go:322] [bootstrap-token] Using token: 7fs1fc.9teiyerfkduv7ctw
	I0626 20:52:03.989716   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.489773   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.618462   47309 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:06.618602   47309 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:06.631936   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:06.655354   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:06.662468   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:06.673817   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:06.680979   47309 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:06.717394   47309 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:07.015067   47309 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:07.079315   47309 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:07.079362   47309 kubeadm.go:322] 
	I0626 20:52:07.079450   47309 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:07.079464   47309 kubeadm.go:322] 
	I0626 20:52:07.079544   47309 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:07.079556   47309 kubeadm.go:322] 
	I0626 20:52:07.079597   47309 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:07.079680   47309 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:07.079765   47309 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:07.079782   47309 kubeadm.go:322] 
	I0626 20:52:07.079867   47309 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:07.079880   47309 kubeadm.go:322] 
	I0626 20:52:07.079960   47309 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:07.079971   47309 kubeadm.go:322] 
	I0626 20:52:07.080038   47309 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:07.080123   47309 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:07.080233   47309 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:07.080249   47309 kubeadm.go:322] 
	I0626 20:52:07.080370   47309 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:07.080467   47309 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:07.080481   47309 kubeadm.go:322] 
	I0626 20:52:07.080574   47309 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.080692   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:07.080738   47309 kubeadm.go:322] 	--control-plane 
	I0626 20:52:07.080756   47309 kubeadm.go:322] 
	I0626 20:52:07.080858   47309 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:07.080870   47309 kubeadm.go:322] 
	I0626 20:52:07.080979   47309 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.081124   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:07.082329   47309 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:07.082353   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:52:07.082369   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:07.084307   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:07.804074   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (36.762635025s)
	I0626 20:52:07.804158   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:07.819772   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:07.830166   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:07.839585   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:07.839633   47605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:08.061341   47605 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:07.085644   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:07.113105   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
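The two lines above are the bridge CNI setup: minikube renders a conflist in memory and copies it to /etc/cni/net.d/1-k8s.conflist (457 bytes in this run). The exact template is internal to minikube, so the values below are assumptions, but a representative bridge-plus-portmap conflist of this shape looks like the following sketch:

package main

import "fmt"

// conflist is an illustrative bridge CNI configuration of the kind minikube
// writes to /etc/cni/net.d/1-k8s.conflist. The exact bytes (457 here) come
// from minikube's internal template; subnet and plugin fields below are
// representative assumptions, not the literal file contents.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() { fmt.Println(conflist) }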
	I0626 20:52:07.158420   47309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:07.158542   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.158590   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=no-preload-934450 minikube.k8s.io/updated_at=2023_06_26T20_52_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.637925   47309 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:07.638078   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.262589   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.762326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.262326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.762334   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.262485   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.762376   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:11.262645   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.490810   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:10.990521   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:11.762599   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.262690   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.762512   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.262844   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.762234   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.262587   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.762670   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.262293   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.763106   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:16.263264   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.991151   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:15.489549   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:19.659464   47605 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:19.659534   47605 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:19.659620   47605 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:19.659793   47605 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:19.659913   47605 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:52:19.659993   47605 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:19.661681   47605 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:19.661770   47605 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:19.661860   47605 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:19.661969   47605 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:19.662065   47605 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:19.662158   47605 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:19.662226   47605 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:19.662321   47605 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:19.662401   47605 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:19.662487   47605 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:19.662595   47605 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:19.662649   47605 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:19.662717   47605 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:19.662779   47605 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:19.662849   47605 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:19.662928   47605 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:19.663014   47605 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:19.663128   47605 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:19.663231   47605 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:19.663286   47605 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:19.663370   47605 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:19.664951   47605 out.go:204]   - Booting up control plane ...
	I0626 20:52:19.665063   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:19.665157   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:19.665246   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:19.665347   47605 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:19.665554   47605 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:19.665662   47605 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504998 seconds
	I0626 20:52:19.665792   47605 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:19.665948   47605 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:19.666027   47605 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:19.666278   47605 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-299839 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:19.666360   47605 kubeadm.go:322] [bootstrap-token] Using token: e53kqf.6hnw5p7blg3e1mpb
	I0626 20:52:19.667988   47605 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:19.668104   47605 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:19.668203   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:19.668357   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:19.668500   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:19.668632   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:19.668732   47605 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:19.668890   47605 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:19.668953   47605 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:19.669024   47605 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:19.669042   47605 kubeadm.go:322] 
	I0626 20:52:19.669122   47605 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:19.669135   47605 kubeadm.go:322] 
	I0626 20:52:19.669243   47605 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:19.669253   47605 kubeadm.go:322] 
	I0626 20:52:19.669284   47605 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:19.669392   47605 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:19.669472   47605 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:19.669483   47605 kubeadm.go:322] 
	I0626 20:52:19.669561   47605 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:19.669571   47605 kubeadm.go:322] 
	I0626 20:52:19.669642   47605 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:19.669661   47605 kubeadm.go:322] 
	I0626 20:52:19.669724   47605 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:19.669831   47605 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:19.669941   47605 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:19.669951   47605 kubeadm.go:322] 
	I0626 20:52:19.670055   47605 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:19.670169   47605 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:19.670179   47605 kubeadm.go:322] 
	I0626 20:52:19.670283   47605 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670428   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:19.670469   47605 kubeadm.go:322] 	--control-plane 
	I0626 20:52:19.670484   47605 kubeadm.go:322] 
	I0626 20:52:19.670588   47605 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:19.670603   47605 kubeadm.go:322] 
	I0626 20:52:19.670715   47605 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670850   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:19.670863   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:52:19.670875   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:19.672750   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:16.762961   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.263008   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.762325   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.262618   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.762659   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.262343   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.763023   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.932557   47309 kubeadm.go:1081] duration metric: took 12.774065652s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:19.932647   47309 kubeadm.go:406] StartCluster complete in 5m26.514862376s
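The burst of `kubectl get sa default` runs above, one every 500ms, is the wait inside elevateKubeSystemPrivileges: the clusterrolebinding is created immediately, but the command keeps failing until the controller-manager has asynchronously created the default service account in kube-system. A minimal sketch of that retry loop (waitForDefaultSA is my name for it, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA sketches the 500ms poll visible in the log above:
// `kubectl get sa default` fails until the controller-manager has created
// the default service account, so the caller simply retries until it
// succeeds or the deadline passes.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default SA exists; RBAC setup can finish
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.27.3/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	fmt.Println(err)
}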
	I0626 20:52:19.932687   47309 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:19.932796   47309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:19.935445   47309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:19.935820   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:19.936149   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:19.936267   47309 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:19.936369   47309 addons.go:66] Setting storage-provisioner=true in profile "no-preload-934450"
	I0626 20:52:19.936388   47309 addons.go:228] Setting addon storage-provisioner=true in "no-preload-934450"
	W0626 20:52:19.936396   47309 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:19.936453   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.936890   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.936917   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.936996   47309 addons.go:66] Setting default-storageclass=true in profile "no-preload-934450"
	I0626 20:52:19.937022   47309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-934450"
	I0626 20:52:19.937178   47309 addons.go:66] Setting metrics-server=true in profile "no-preload-934450"
	I0626 20:52:19.937198   47309 addons.go:228] Setting addon metrics-server=true in "no-preload-934450"
	W0626 20:52:19.937206   47309 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:19.937259   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.937461   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937485   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.937664   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937686   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.956754   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0626 20:52:19.956777   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0626 20:52:19.956923   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I0626 20:52:19.957245   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957327   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957473   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957897   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.957918   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958063   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958078   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958217   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958240   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958385   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959001   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.959029   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.959257   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959323   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959523   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.960115   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.960168   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.980739   47309 addons.go:228] Setting addon default-storageclass=true in "no-preload-934450"
	W0626 20:52:19.980887   47309 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:19.980924   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.981308   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.981348   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.982528   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0626 20:52:19.982768   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I0626 20:52:19.983398   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984115   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984291   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.984303   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.984767   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985276   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.985294   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.985346   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.985720   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985919   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.987605   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.989810   47309 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:19.991208   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:19.991229   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:19.991248   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:19.989487   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.997528   47309 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:19.996110   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:19.996135   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999411   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:19.999436   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999495   47309 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:19.999511   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:19.999532   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.002886   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.003159   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.003321   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.004492   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.004806   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
	I0626 20:52:20.004991   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.005018   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.005189   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.005234   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.005402   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.005568   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.005703   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.005881   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.005899   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.006233   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.006590   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:20.006614   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:20.022796   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0626 20:52:20.023252   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.023827   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.023852   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.024209   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.024425   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:20.026279   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:20.026527   47309 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.026542   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:20.026559   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.029302   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029775   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.029804   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029944   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.030138   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.030321   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.030454   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
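Each "new ssh client: &{IP:... Port:22 SSHKeyPath:... Username:docker}" line above corresponds to sshutil dialing the VM with key-based auth before the scp and apply steps run. A minimal golang.org/x/crypto/ssh sketch of that connection setup, assuming only the ingredients the log shows (IP, port 22, a private key path, user "docker"); this is an illustration of the shape, not minikube's actual sshutil code:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// dial opens an SSH connection with public-key auth, the same ingredients
// sshutil logs above for the no-preload-934450 machine.
func dial(addr, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Test VMs are throwaway, so host keys are not pinned here;
		// production code should verify them instead.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := dial("192.168.50.38:22",
		"/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa",
		"docker")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}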
	I0626 20:52:20.331846   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.341298   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:20.352664   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:20.352693   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:20.376961   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:20.420573   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:20.420599   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:20.495388   47309 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-934450" context rescaled to 1 replicas
	I0626 20:52:20.495436   47309 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:20.497711   47309 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:20.499512   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:20.560528   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:20.560559   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:20.647734   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:21.308936   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.802312904s)
	I0626 20:52:21.309013   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:21.323340   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:21.333741   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:21.346686   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:21.346741   47779 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:21.427299   47779 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:21.427431   47779 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:21.598474   47779 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:21.598609   47779 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:21.598727   47779 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:52:21.802443   47779 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:17.989506   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:20.002885   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:21.804179   47779 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:21.804277   47779 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:21.804985   47779 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:21.805576   47779 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:21.806465   47779 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:21.807206   47779 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:21.807988   47779 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:21.808775   47779 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:21.809427   47779 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:21.810136   47779 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:21.810809   47779 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:21.811489   47779 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:21.811563   47779 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:22.127084   47779 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:22.371731   47779 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:22.635165   47779 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:22.843347   47779 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:22.866673   47779 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:22.868080   47779 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:22.868259   47779 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:23.015798   47779 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:22.468922   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.137025983s)
	I0626 20:52:22.468974   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.468988   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469285   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469339   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469359   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469390   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469315   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:22.469630   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469649   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469669   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469678   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469900   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469915   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597030   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.255690675s)
	I0626 20:52:23.597078   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.220078989s)
	I0626 20:52:23.597104   47309 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:23.597084   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597131   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597130   47309 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.097584802s)
	I0626 20:52:23.597162   47309 node_ready.go:35] waiting up to 6m0s for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.597463   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597463   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597489   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597499   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597508   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597879   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597931   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597950   47309 main.go:141] libmachine: Making call to close connection to plugin binary
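The long sed pipeline whose completion is logged above ("host record injected into CoreDNS's ConfigMap") edits the Corefile inside the coredns ConfigMap: it inserts a hosts{} block before the `forward . /etc/resolv.conf` line so host.minikube.internal resolves to 192.168.50.1, adds `log` before `errors`, and replaces the ConfigMap. Reconstructed from those sed expressions, the edited fragment looks like this; the surrounding directives are abridged from the stock kubeadm Corefile and are an assumption, while the hosts and log lines are exactly what the command injects:

package main

import "fmt"

// corefile shows the effect of the sed pipeline logged above. Only the
// hosts{} block and the `log` directive are taken from the command itself;
// the rest is the usual kubeadm CoreDNS default, abridged.
const corefile = `.:53 {
    log
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    hosts {
        192.168.50.1 host.minikube.internal
        fallthrough
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}`

func main() { fmt.Println(corefile) }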
	I0626 20:52:23.632416   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.984627683s)
	I0626 20:52:23.632472   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632485   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.632907   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.632919   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.632940   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.632967   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632982   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.633279   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.633297   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.633307   47309 addons.go:464] Verifying addon metrics-server=true in "no-preload-934450"
	I0626 20:52:23.633353   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.635198   47309 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:19.674407   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:19.702224   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:52:19.744577   47605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=embed-certs-299839 minikube.k8s.io/updated_at=2023_06_26T20_52_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.783628   47605 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:20.149671   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:20.782659   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.283295   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.782574   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.283137   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.782766   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.282641   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.783459   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.017432   47779 out.go:204]   - Booting up control plane ...
	I0626 20:52:23.017573   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:23.019187   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:23.020097   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:23.023559   47779 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:23.025808   47779 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:23.636740   47309 addons.go:499] enable addons completed in 3.700468963s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0626 20:52:23.637657   47309 node_ready.go:49] node "no-preload-934450" has status "Ready":"True"
	I0626 20:52:23.637673   47309 node_ready.go:38] duration metric: took 40.495678ms waiting for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.637684   47309 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:23.676466   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:25.699614   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:22.489080   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:24.490209   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
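The recurring pod_ready lines, both the coredns wait just above and the metrics-server pod that never leaves "Ready":"False", come down to reading the PodReady condition off the pod's status every couple of seconds. A client-go sketch of that check, assuming the kubeconfig path from the log (isPodReady is a hypothetical helper name):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True; the
// `has status "Ready":"False"` lines above are this check logging its
// result on each poll until the condition flips or the wait times out.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := isPodReady(cs, "kube-system", "metrics-server-74d5856cc6-985dp")
		fmt.Println(ready, err)
		if ready {
			return
		}
		time.Sleep(2 * time.Second)
	}
}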
	I0626 20:52:24.282506   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:24.782560   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.282565   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.783022   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.282856   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.783243   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.282657   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.783258   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.282802   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.783019   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.283285   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.782968   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.282489   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.782763   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.283126   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.445729   47605 kubeadm.go:1081] duration metric: took 11.701128618s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:31.445766   47605 kubeadm.go:406] StartCluster complete in 5m31.748710798s
	I0626 20:52:31.445787   47605 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.445873   47605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:31.448427   47605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.448700   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:31.448792   47605 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:31.448866   47605 addons.go:66] Setting storage-provisioner=true in profile "embed-certs-299839"
	I0626 20:52:31.448871   47605 addons.go:66] Setting default-storageclass=true in profile "embed-certs-299839"
	I0626 20:52:31.448884   47605 addons.go:228] Setting addon storage-provisioner=true in "embed-certs-299839"
	I0626 20:52:31.448885   47605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-299839"
	W0626 20:52:31.448892   47605 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:31.448938   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:31.448948   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.448986   47605 addons.go:66] Setting metrics-server=true in profile "embed-certs-299839"
	I0626 20:52:31.449006   47605 addons.go:228] Setting addon metrics-server=true in "embed-certs-299839"
	W0626 20:52:31.449013   47605 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:31.449053   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449762   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450455   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450635   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.450708   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.468787   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0626 20:52:31.469015   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0626 20:52:31.469401   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469497   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469929   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.469947   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470036   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.470073   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470548   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470605   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470723   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39029
	I0626 20:52:31.470915   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.471202   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.471236   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.471374   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.471846   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.471871   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.481862   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.482471   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.482499   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.492391   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0626 20:52:31.493190   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.493807   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.493833   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.494190   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.494347   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.496376   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.499801   47605 addons.go:228] Setting addon default-storageclass=true in "embed-certs-299839"
	W0626 20:52:31.499822   47605 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:31.499851   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.500224   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.500253   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.506027   47605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:31.507267   47605 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.507286   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:31.507306   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.507954   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0626 20:52:31.508919   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.509350   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.509364   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.509784   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.510070   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.511452   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.513168   47605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:28.196489   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:30.196782   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:26.989644   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:29.488966   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.506536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.511805   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.512430   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.514510   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.514522   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:31.514530   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.514536   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:31.514555   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.514709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.514860   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.515029   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.517249   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517628   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.517653   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517774   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.517948   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.518282   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.518454   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.522114   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0626 20:52:31.522433   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.522982   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.523010   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.523416   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.523984   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.524019   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.545037   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0626 20:52:31.545523   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.546109   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.546140   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.546551   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.546826   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.549289   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.549597   47605 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.549618   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:31.549638   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.553457   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553713   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.553744   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553798   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.553995   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.554131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.554284   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.693230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:31.713818   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.718654   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:31.718682   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:31.734681   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.767394   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:31.767424   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:31.884424   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:31.884443   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:31.961893   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:32.055887   47605 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-299839" context rescaled to 1 replicas
	I0626 20:52:32.055933   47605 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:32.058697   47605 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:32.530480   47779 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.504525 seconds
	I0626 20:52:32.530633   47779 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:32.556112   47779 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:33.096104   47779 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:33.096372   47779 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-473235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:33.615425   47779 kubeadm.go:322] [bootstrap-token] Using token: fvy9dh.hbeabw0ufqdnf2rd
	I0626 20:52:33.617480   47779 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:33.617622   47779 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:33.630158   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:33.641973   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:33.649480   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:33.657736   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:33.663093   47779 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:33.698108   47779 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:34.017843   47779 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:34.069498   47779 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:34.070500   47779 kubeadm.go:322] 
	I0626 20:52:34.070587   47779 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:34.070600   47779 kubeadm.go:322] 
	I0626 20:52:34.070691   47779 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:34.070705   47779 kubeadm.go:322] 
	I0626 20:52:34.070734   47779 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:34.070809   47779 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:34.070915   47779 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:34.070952   47779 kubeadm.go:322] 
	I0626 20:52:34.071047   47779 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:34.071060   47779 kubeadm.go:322] 
	I0626 20:52:34.071114   47779 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:34.071124   47779 kubeadm.go:322] 
	I0626 20:52:34.071183   47779 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:34.071276   47779 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:34.071360   47779 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:34.071369   47779 kubeadm.go:322] 
	I0626 20:52:34.071454   47779 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:34.071550   47779 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:34.071558   47779 kubeadm.go:322] 
	I0626 20:52:34.071677   47779 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.071823   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:34.071852   47779 kubeadm.go:322] 	--control-plane 
	I0626 20:52:34.071860   47779 kubeadm.go:322] 
	I0626 20:52:34.071961   47779 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:34.071973   47779 kubeadm.go:322] 
	I0626 20:52:34.072075   47779 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.072202   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:34.072734   47779 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:34.072775   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:52:34.072794   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:34.074659   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:32.060653   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:33.969636   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.276366101s)
	I0626 20:52:33.969679   47605 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:34.114443   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.400580422s)
	I0626 20:52:34.114587   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114636   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114483   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.379765696s)
	I0626 20:52:34.114695   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114993   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115036   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115049   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.115059   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.115068   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.115386   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115394   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115458   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117682   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.117720   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.117736   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117754   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.117764   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.119184   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.119204   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.119218   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.119238   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.119253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.120750   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.120787   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.120800   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.800635   47605 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.739945617s)
	I0626 20:52:34.800672   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838732117s)
	I0626 20:52:34.800721   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.800740   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.800674   47605 node_ready.go:35] waiting up to 6m0s for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.801059   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.801086   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.801103   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.801112   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.802733   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.802767   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.802781   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.802798   47605 addons.go:464] Verifying addon metrics-server=true in "embed-certs-299839"
	I0626 20:52:34.804616   47605 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:52:34.076233   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:34.097578   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:52:34.126294   47779 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:34.126351   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.126361   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=default-k8s-diff-port-473235 minikube.k8s.io/updated_at=2023_06_26T20_52_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.672738   47779 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:34.672886   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:32.196979   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.198202   47309 pod_ready.go:97] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198243   47309 pod_ready.go:81] duration metric: took 10.521748073s waiting for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:34.198256   47309 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198265   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208718   47309 pod_ready.go:92] pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.208751   47309 pod_ready.go:81] duration metric: took 10.474456ms waiting for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208765   47309 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216757   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.216787   47309 pod_ready.go:81] duration metric: took 8.014039ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216800   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226840   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.226862   47309 pod_ready.go:81] duration metric: took 10.054474ms waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226875   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234229   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.234252   47309 pod_ready.go:81] duration metric: took 7.369366ms waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234265   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603958   47309 pod_ready.go:92] pod "kube-proxy-jhz99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.603985   47309 pod_ready.go:81] duration metric: took 369.712585ms waiting for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603999   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.992990   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.993018   47309 pod_ready.go:81] duration metric: took 389.011206ms waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.993033   47309 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:33.991358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:36.489561   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.806005   47605 addons.go:499] enable addons completed in 3.357208024s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:52:34.826098   47605 node_ready.go:49] node "embed-certs-299839" has status "Ready":"True"
	I0626 20:52:34.826123   47605 node_ready.go:38] duration metric: took 25.328707ms waiting for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.826131   47605 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:34.853293   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388894   47605 pod_ready.go:92] pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.388921   47605 pod_ready.go:81] duration metric: took 1.535604079s waiting for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388931   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397936   47605 pod_ready.go:92] pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.397962   47605 pod_ready.go:81] duration metric: took 9.024703ms waiting for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397978   47605 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409066   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.409098   47605 pod_ready.go:81] duration metric: took 11.112746ms waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409111   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419292   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.419313   47605 pod_ready.go:81] duration metric: took 10.193966ms waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419322   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429116   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.429140   47605 pod_ready.go:81] duration metric: took 9.812044ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429154   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316268   47605 pod_ready.go:92] pod "kube-proxy-scfwr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.316318   47605 pod_ready.go:81] duration metric: took 887.155494ms waiting for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316334   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605351   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.605394   47605 pod_ready.go:81] duration metric: took 289.052198ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605409   47605 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:35.287764   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:35.787902   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.287089   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.786922   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.287932   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.787255   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.287820   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.786891   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.287467   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.787282   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.400022   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:39.401566   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:41.404969   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:38.491696   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.990293   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.013927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:42.518436   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.287734   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:40.786949   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.287187   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.787722   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.287098   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.787623   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.287242   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.787224   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.287339   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.787760   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.287273   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.787052   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.287810   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.436665   47779 kubeadm.go:1081] duration metric: took 12.310369141s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:46.436696   47779 kubeadm.go:406] StartCluster complete in 5m23.972219662s
	I0626 20:52:46.436715   47779 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.436798   47779 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:46.438623   47779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.438897   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:46.439016   47779 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:46.439110   47779 addons.go:66] Setting storage-provisioner=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439117   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:46.439128   47779 addons.go:66] Setting default-storageclass=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439166   47779 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-473235"
	I0626 20:52:46.439128   47779 addons.go:228] Setting addon storage-provisioner=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439240   47779 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:46.439285   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439133   47779 addons.go:66] Setting metrics-server=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439336   47779 addons.go:228] Setting addon metrics-server=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439346   47779 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:46.439383   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439663   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439691   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439694   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439717   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439733   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439754   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.456038   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0626 20:52:46.456227   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0626 20:52:46.456533   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.456769   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.457072   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457092   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457258   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457280   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457413   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457749   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457902   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.459751   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0626 20:52:46.460296   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.460326   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.460688   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.462951   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.462975   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.463384   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.463981   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.464006   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.477368   47779 addons.go:228] Setting addon default-storageclass=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.477472   47779 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:46.477516   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.477987   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.478063   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.479865   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0626 20:52:46.480358   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.480932   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.480951   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.481335   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.482608   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0626 20:52:46.482630   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.482982   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.483505   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.483521   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.483907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.484123   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.485234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.487634   47779 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:46.486430   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.488916   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:46.488938   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:46.488959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.490698   47779 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:43.900514   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.900540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:43.488701   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.992735   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:46.491860   47779 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.491875   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:46.491893   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.492950   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.493834   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.493855   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.494361   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.494827   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.494987   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.495130   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.496109   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.496170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496192   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.496213   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496294   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.496444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.496549   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.502119   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
	I0626 20:52:46.502456   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.502898   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.502916   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.503203   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.503723   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.503747   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.522597   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0626 20:52:46.523240   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.523892   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.523912   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.524423   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.524674   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.526567   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.528682   47779 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.528699   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:46.528721   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.531983   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532450   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.532477   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532785   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.533992   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.534158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.534302   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.698636   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:46.819666   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.915074   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.918133   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:46.918161   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:47.006856   47779 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-473235" context rescaled to 1 replicas
	I0626 20:52:47.006907   47779 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:47.008746   47779 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:45.013051   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.014722   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.010273   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:47.015003   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:47.015022   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:47.099554   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:47.099583   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:47.162192   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:48.848078   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.149396252s)
	I0626 20:52:48.848110   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.028412306s)
	I0626 20:52:48.848145   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848157   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848112   47779 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:48.848418   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848438   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848440   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848448   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848460   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848678   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848699   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848712   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848715   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848722   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848936   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848959   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.142482   47779 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.13217662s)
	I0626 20:52:49.142522   47779 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.142664   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.227563186s)
	I0626 20:52:49.142706   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.142723   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143018   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143037   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143047   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.143055   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.143309   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143402   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143378   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.230635   47779 node_ready.go:49] node "default-k8s-diff-port-473235" has status "Ready":"True"
	I0626 20:52:49.230663   47779 node_ready.go:38] duration metric: took 88.12938ms waiting for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.230688   47779 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:49.248094   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:49.857182   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694948259s)
	I0626 20:52:49.857243   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857254   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857552   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857569   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857579   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857588   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857816   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857836   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857847   47779 addons.go:464] Verifying addon metrics-server=true in "default-k8s-diff-port-473235"
	I0626 20:52:49.859648   47779 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:49.860902   47779 addons.go:499] enable addons completed in 3.421885216s: enabled=[default-storageclass storage-provisioner metrics-server]
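	Addon installation follows a fixed pattern visible above: each manifest is scp'd from memory to /etc/kubernetes/addons/ over SSH, then applied with the kubectl binary matching the cluster version, pointed at the node-local kubeconfig. The metrics-server apply from the log, as a standalone sketch run on the node:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.27.3/kubectl apply \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml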
	I0626 20:52:47.901422   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.402347   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:48.490248   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.991228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.082154   46683 pod_ready.go:81] duration metric: took 4m0.000473504s waiting for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:51.082180   46683 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:52:51.082198   46683 pod_ready.go:38] duration metric: took 4m1.199581008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:51.082227   46683 kubeadm.go:640] restartCluster took 5m4.421255564s
	W0626 20:52:51.082286   46683 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:52:51.082319   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:52:50.897742   47779 pod_ready.go:92] pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.897765   47779 pod_ready.go:81] duration metric: took 1.649649958s waiting for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.897777   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.924988   47779 pod_ready.go:92] pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.925007   47779 pod_ready.go:81] duration metric: took 27.222965ms waiting for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.925016   47779 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942760   47779 pod_ready.go:92] pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.942781   47779 pod_ready.go:81] duration metric: took 17.75819ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942790   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956204   47779 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.956224   47779 pod_ready.go:81] duration metric: took 13.428405ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956235   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964542   47779 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.964569   47779 pod_ready.go:81] duration metric: took 8.32705ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964581   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791355   47779 pod_ready.go:92] pod "kube-proxy-k4hzc" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:51.791376   47779 pod_ready.go:81] duration metric: took 826.787812ms waiting for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791384   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078670   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:52.078700   47779 pod_ready.go:81] duration metric: took 287.306474ms waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078714   47779 pod_ready.go:38] duration metric: took 2.848014299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:52.078733   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:52:52.078789   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:52:52.094414   47779 api_server.go:72] duration metric: took 5.08747775s to wait for apiserver process to appear ...
	I0626 20:52:52.094444   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:52:52.094468   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:52:52.101300   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:52:52.102682   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:52:52.102703   47779 api_server.go:131] duration metric: took 8.250707ms to wait for apiserver health ...
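	The health wait is a plain HTTPS GET against the apiserver healthz endpoint on this profile's non-default port 8444, followed by a version probe. An equivalent manual check (a sketch; -k is needed because the apiserver certificate is signed by minikube's own CA):

	    curl -sk https://192.168.61.238:8444/healthz
	    # expected: ok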
	I0626 20:52:52.102712   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:52:52.283428   47779 system_pods.go:59] 9 kube-system pods found
	I0626 20:52:52.283459   47779 system_pods.go:61] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.283467   47779 system_pods.go:61] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.283474   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.283482   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.283488   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.283493   47779 system_pods.go:61] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.283500   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.283511   47779 system_pods.go:61] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.283519   47779 system_pods.go:61] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.283527   47779 system_pods.go:74] duration metric: took 180.810034ms to wait for pod list to return data ...
	I0626 20:52:52.283540   47779 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:52:52.478374   47779 default_sa.go:45] found service account: "default"
	I0626 20:52:52.478400   47779 default_sa.go:55] duration metric: took 194.853163ms for default service account to be created ...
	I0626 20:52:52.478418   47779 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:52:52.683697   47779 system_pods.go:86] 9 kube-system pods found
	I0626 20:52:52.683724   47779 system_pods.go:89] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.683730   47779 system_pods.go:89] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.683735   47779 system_pods.go:89] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.683740   47779 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.683745   47779 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.683748   47779 system_pods.go:89] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.683752   47779 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.683761   47779 system_pods.go:89] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.683773   47779 system_pods.go:89] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.683789   47779 system_pods.go:126] duration metric: took 205.364587ms to wait for k8s-apps to be running ...
	I0626 20:52:52.683798   47779 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:52:52.683846   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:52.698439   47779 system_svc.go:56] duration metric: took 14.634482ms WaitForService to wait for kubelet.
	I0626 20:52:52.698463   47779 kubeadm.go:581] duration metric: took 5.691531199s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:52:52.698480   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:52:52.879414   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:52:52.879441   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:52:52.879454   47779 node_conditions.go:105] duration metric: took 180.969761ms to run NodePressure ...
	I0626 20:52:52.879466   47779 start.go:228] waiting for startup goroutines ...
	I0626 20:52:52.879473   47779 start.go:233] waiting for cluster config update ...
	I0626 20:52:52.879484   47779 start.go:242] writing updated cluster config ...
	I0626 20:52:52.879748   47779 ssh_runner.go:195] Run: rm -f paused
	I0626 20:52:52.928982   47779 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:52:52.930701   47779 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-473235" cluster and "default" namespace by default
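	At this point the default-k8s-diff-port-473235 profile is up with zero kubectl/cluster minor-version skew (1.27.3 on both sides). A quick sanity check from the host, assuming the kubeconfig context carries the profile name as minikube normally writes it:

	    kubectl --context default-k8s-diff-port-473235 cluster-info
	    # the control plane URL should read https://192.168.61.238:8444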
	I0626 20:52:49.513843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.515851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:54.013443   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:52.901965   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:55.400541   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:56.014186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:58.516445   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:57.900857   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:59.901944   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:01.013089   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:03.015510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:02.400534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:04.400691   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:06.401897   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:05.513529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.013510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.901751   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:11.400891   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:10.513562   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:12.515529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:13.900503   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:15.900570   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:14.208647   46683 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (23.126299276s)
	I0626 20:53:14.208727   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:14.222919   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:53:14.234762   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:53:14.244800   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
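	The status-2 exit above is expected: kubeadm reset (completed at 20:53:14 after 23.1s) removed all four kubeconfig files, so the stale-config check finds nothing to clean and minikube proceeds straight to a fresh kubeadm init. The decision the check encodes, as a sketch of the logic rather than minikube's actual code:

	    if sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	         /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; then
	      echo "all four configs present: clean up stale configs before init"
	    else
	      echo "configs missing: skip cleanup, run kubeadm init directly"
	    fi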
	I0626 20:53:14.244840   46683 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0626 20:53:14.465786   46683 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:53:15.014781   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.017400   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.901367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:20.401697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:19.515459   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.015763   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.900407   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:24.901270   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.255771   46683 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0626 20:53:27.255867   46683 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:53:27.255968   46683 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:53:27.256115   46683 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:53:27.256237   46683 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:53:27.256368   46683 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:53:27.256489   46683 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:53:27.256550   46683 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0626 20:53:27.256604   46683 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:53:27.258050   46683 out.go:204]   - Generating certificates and keys ...
	I0626 20:53:27.258140   46683 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:53:27.258235   46683 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:53:27.258357   46683 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:53:27.258441   46683 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:53:27.258554   46683 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:53:27.258611   46683 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:53:27.258665   46683 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:53:27.258737   46683 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:53:27.258832   46683 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:53:27.258907   46683 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:53:27.258954   46683 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:53:27.259034   46683 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:53:27.259106   46683 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:53:27.259170   46683 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:53:27.259247   46683 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:53:27.259325   46683 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:53:27.259410   46683 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:53:27.260969   46683 out.go:204]   - Booting up control plane ...
	I0626 20:53:27.261074   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:53:27.261181   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:53:27.261257   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:53:27.261341   46683 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:53:27.261496   46683 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:53:27.261599   46683 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003012 seconds
	I0626 20:53:27.261709   46683 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:53:27.261854   46683 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:53:27.261940   46683 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:53:27.262112   46683 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-490377 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0626 20:53:27.262210   46683 kubeadm.go:322] [bootstrap-token] Using token: 9pdj92.0ssfpvr0ns0ww3t3
	I0626 20:53:27.263670   46683 out.go:204]   - Configuring RBAC rules ...
	I0626 20:53:27.263769   46683 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:53:27.263903   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:53:27.264029   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:53:27.264172   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:53:27.264278   46683 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:53:27.264333   46683 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:53:27.264372   46683 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:53:27.264379   46683 kubeadm.go:322] 
	I0626 20:53:27.264445   46683 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:53:27.264454   46683 kubeadm.go:322] 
	I0626 20:53:27.264557   46683 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:53:27.264568   46683 kubeadm.go:322] 
	I0626 20:53:27.264598   46683 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:53:27.264668   46683 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:53:27.264715   46683 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:53:27.264721   46683 kubeadm.go:322] 
	I0626 20:53:27.264769   46683 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:53:27.264846   46683 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:53:27.264934   46683 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:53:27.264943   46683 kubeadm.go:322] 
	I0626 20:53:27.265038   46683 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0626 20:53:27.265101   46683 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:53:27.265107   46683 kubeadm.go:322] 
	I0626 20:53:27.265171   46683 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265269   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:53:27.265292   46683 kubeadm.go:322]     --control-plane 	  
	I0626 20:53:27.265298   46683 kubeadm.go:322] 
	I0626 20:53:27.265439   46683 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:53:27.265451   46683 kubeadm.go:322] 
	I0626 20:53:27.265581   46683 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265739   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:53:27.265753   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:53:27.265765   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:53:27.267293   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:53:24.515093   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.014403   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.401630   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:29.404203   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.268439   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:53:27.281135   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
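	With the kvm2 driver and the crio runtime, minikube selects the bridge CNI and writes its config (457 bytes here) to /etc/cni/net.d/1-k8s.conflist. The log does not show the file contents; to inspect the generated conflist on a live node, following the ssh invocation style used elsewhere in this report:

	    out/minikube-linux-amd64 -p old-k8s-version-490377 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"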
	I0626 20:53:27.304145   46683 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:53:27.304275   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=old-k8s-version-490377 minikube.k8s.io/updated_at=2023_06_26T20_53_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.304277   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.555789   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.571040   46683 ops.go:34] apiserver oom_adj: -16
	I0626 20:53:28.180843   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:28.681089   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.180441   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.680355   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.180860   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.680971   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.181088   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.680352   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.516069   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.013135   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.013391   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:31.901777   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.400314   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:36.400967   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.180338   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:32.680389   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.180568   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.681010   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.180575   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.680905   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.180640   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.680412   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.181081   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.680836   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.514263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:39.013193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:38.900309   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:40.900622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:37.181178   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:37.680710   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.180280   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.680304   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.681177   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.180431   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.681031   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.180847   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.681058   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.680883   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.800538   46683 kubeadm.go:1081] duration metric: took 15.496322508s to wait for elevateKubeSystemPrivileges.
	I0626 20:53:42.800568   46683 kubeadm.go:406] StartCluster complete in 5m56.189450192s
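	The repeated get sa default calls above are the elevateKubeSystemPrivileges wait: after creating the minikube-rbac clusterrolebinding, minikube polls at roughly 500ms intervals until the default ServiceAccount exists, which took 15.5s on this run. A minimal sketch of the same wait in shell:

	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # matches the ~500ms retry cadence in the log
	    done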
	I0626 20:53:42.800584   46683 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.800661   46683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:53:42.802530   46683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.802755   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:53:42.802810   46683 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:53:42.802908   46683 addons.go:66] Setting storage-provisioner=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802926   46683 addons.go:228] Setting addon storage-provisioner=true in "old-k8s-version-490377"
	W0626 20:53:42.802936   46683 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:53:42.802934   46683 addons.go:66] Setting default-storageclass=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802953   46683 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-490377"
	I0626 20:53:42.802972   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:53:42.802983   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.802974   46683 addons.go:66] Setting metrics-server=true in profile "old-k8s-version-490377"
	I0626 20:53:42.803034   46683 addons.go:228] Setting addon metrics-server=true in "old-k8s-version-490377"
	W0626 20:53:42.803048   46683 addons.go:237] addon metrics-server should already be in state true
	I0626 20:53:42.803155   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.803353   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803394   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803437   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803468   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803563   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803607   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.822676   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0626 20:53:42.822891   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0626 20:53:42.823127   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823221   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0626 20:53:42.823284   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823599   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823763   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823771   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823783   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.823790   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824056   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.824082   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824096   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824141   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824310   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.824408   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824656   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824682   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.824924   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824954   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.839635   46683 addons.go:228] Setting addon default-storageclass=true in "old-k8s-version-490377"
	W0626 20:53:42.839655   46683 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:53:42.839675   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.840131   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.840171   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.846479   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0626 20:53:42.847180   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.847711   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.847728   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.848194   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.848454   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.848519   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I0626 20:53:42.850321   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.850427   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.852331   46683 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:53:42.851252   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.853522   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.853581   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:53:42.853603   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:53:42.853625   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.854082   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.854292   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.856641   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.858158   46683 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:53:42.857809   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.859467   46683 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:42.859485   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:53:42.859500   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.859505   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.859528   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.858223   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.858466   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0626 20:53:42.860179   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.860331   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.860421   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.860783   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.860909   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.860923   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.861642   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.862199   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.862246   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.863700   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864103   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.864124   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864413   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.864598   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.864737   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.864867   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.878470   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0626 20:53:42.878961   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.879500   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.879510   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.879860   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.880063   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.881757   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.882028   46683 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:42.882040   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:53:42.882054   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.887689   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.887749   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.887779   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887888   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.888058   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.888203   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.981495   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:53:43.064530   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:53:43.064554   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:53:43.074105   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:43.091876   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:43.132074   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:53:43.132095   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:53:43.219103   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:53:43.219133   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:53:43.285081   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:53:43.443796   46683 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-490377" context rescaled to 1 replicas
	I0626 20:53:43.443841   46683 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:53:43.445639   46683 out.go:177] * Verifying Kubernetes components...
	I0626 20:53:41.014279   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.515278   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.447458   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:43.642242   46683 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0626 20:53:44.194901   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.102988033s)
	I0626 20:53:44.194990   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195008   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.194932   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120793889s)
	I0626 20:53:44.195085   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195096   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195452   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195466   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195475   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195486   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195493   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195518   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195529   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195714   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195765   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195774   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195816   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195893   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195905   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195922   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195936   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.196171   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.196190   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.196197   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.260680   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.260703   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.260706   46683 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.261103   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261122   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261134   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.261144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.261146   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.261364   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261386   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261396   46683 addons.go:464] Verifying addon metrics-server=true in "old-k8s-version-490377"
	I0626 20:53:44.262936   46683 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:53:42.901604   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.902182   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.264049   46683 addons.go:499] enable addons completed in 1.461244367s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:53:44.318103   46683 node_ready.go:49] node "old-k8s-version-490377" has status "Ready":"True"
	I0626 20:53:44.318135   46683 node_ready.go:38] duration metric: took 57.40895ms waiting for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.318147   46683 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:53:44.333409   46683 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:46.345926   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:46.015128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.516066   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:47.400802   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:49.901066   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.347529   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:50.847639   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:51.012404   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.012697   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:52.400326   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:54.400932   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.402434   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.345907   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:55.345824   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.345850   46683 pod_ready.go:81] duration metric: took 11.012408828s waiting for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.345858   46683 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350198   46683 pod_ready.go:92] pod "kube-proxy-m7hz7" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.350214   46683 pod_ready.go:81] duration metric: took 4.351274ms waiting for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350222   46683 pod_ready.go:38] duration metric: took 11.032065043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:53:55.350236   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:53:55.350285   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:53:55.366478   46683 api_server.go:72] duration metric: took 11.922600619s to wait for apiserver process to appear ...
	I0626 20:53:55.366501   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:53:55.366518   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:53:55.373257   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:53:55.374362   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:53:55.374382   46683 api_server.go:131] duration metric: took 7.874169ms to wait for apiserver health ...
	I0626 20:53:55.374390   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:53:55.377704   46683 system_pods.go:59] 4 kube-system pods found
	I0626 20:53:55.377719   46683 system_pods.go:61] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.377724   46683 system_pods.go:61] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.377744   46683 system_pods.go:61] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.377754   46683 system_pods.go:61] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.377759   46683 system_pods.go:74] duration metric: took 3.35753ms to wait for pod list to return data ...
	I0626 20:53:55.377765   46683 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:53:55.379628   46683 default_sa.go:45] found service account: "default"
	I0626 20:53:55.379641   46683 default_sa.go:55] duration metric: took 1.87263ms for default service account to be created ...
	I0626 20:53:55.379647   46683 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:53:55.382155   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.382171   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.382176   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.382183   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.382189   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.382204   46683 retry.go:31] will retry after 310.903974ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:55.698587   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.698613   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.698618   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.698625   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.698631   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.698646   46683 retry.go:31] will retry after 300.100433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.005356   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.005397   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.005408   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.005419   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.005427   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.005446   46683 retry.go:31] will retry after 407.352435ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.417879   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.417905   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.417910   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.417916   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.417922   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.417935   46683 retry.go:31] will retry after 483.508514ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:55.013247   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:57.015631   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:58.900650   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.401491   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.906260   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.906282   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.906287   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.906293   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.906301   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.906319   46683 retry.go:31] will retry after 527.167542ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:57.438949   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:57.438985   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:57.438995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:57.439006   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:57.439019   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:57.439038   46683 retry.go:31] will retry after 902.255612ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:58.346131   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:58.346161   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:58.346166   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:58.346173   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:58.346179   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:58.346192   46683 retry.go:31] will retry after 904.271086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.256458   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:59.256489   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:59.256497   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:59.256509   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:59.256517   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:59.256534   46683 retry.go:31] will retry after 1.069634228s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:00.331828   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:00.331858   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:00.331865   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:00.331873   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:00.331879   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:00.331896   46683 retry.go:31] will retry after 1.418598639s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:01.755104   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:01.755131   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:01.755136   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:01.755143   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:01.755149   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:01.755162   46683 retry.go:31] will retry after 1.624135654s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.514847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.515086   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.900425   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:05.900854   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.385085   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:03.385111   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:03.385116   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:03.385122   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:03.385128   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:03.385142   46683 retry.go:31] will retry after 1.861818901s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:05.251844   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:05.251870   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:05.251875   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:05.251882   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:05.251888   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:05.251901   46683 retry.go:31] will retry after 3.23679019s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:06.013786   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.514493   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.399542   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:10.400928   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.494644   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:08.494669   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:08.494674   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:08.494681   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:08.494687   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:08.494700   46683 retry.go:31] will retry after 4.210335189s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:10.514707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.515079   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.415273   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:14.899807   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.709730   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:12.709754   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:12.709759   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:12.709765   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:12.709771   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:12.709785   46683 retry.go:31] will retry after 4.208864521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:15.012766   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:17.012807   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.014851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.901107   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.400540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:21.402204   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.923625   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:16.923654   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:16.923662   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:16.923673   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:16.923682   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:16.923701   46683 retry.go:31] will retry after 6.417296046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:21.514829   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.515117   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.402546   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:25.903195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.347074   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:23.347099   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:23.347105   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Pending
	I0626 20:54:23.347108   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:23.347115   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:23.347121   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:23.347133   46683 retry.go:31] will retry after 7.108155838s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:26.013263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.013708   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.399697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.401036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.460927   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:30.460950   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:30.460955   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:30.460995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:30.461004   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:30.461014   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:30.461027   46683 retry.go:31] will retry after 9.756193162s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:30.514139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.514334   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:34.901064   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:35.013362   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.013815   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.014126   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.400945   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:40.223985   46683 system_pods.go:86] 7 kube-system pods found
	I0626 20:54:40.224009   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:40.224014   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Pending
	I0626 20:54:40.224018   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Pending
	I0626 20:54:40.224022   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:40.224026   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:40.224032   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:40.224037   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:40.224052   46683 retry.go:31] will retry after 8.963386657s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:41.515388   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:44.015053   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:41.900424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:43.901263   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.400098   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.514128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.013743   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.195390   46683 system_pods.go:86] 8 kube-system pods found
	I0626 20:54:49.195416   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:49.195421   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Running
	I0626 20:54:49.195426   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Running
	I0626 20:54:49.195430   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:49.195434   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:49.195438   46683 system_pods.go:89] "kube-scheduler-old-k8s-version-490377" [c6fe04b8-d037-452b-bf41-3719c032b7ef] Running
	I0626 20:54:49.195444   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:49.195450   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:49.195458   46683 system_pods.go:126] duration metric: took 53.81580645s to wait for k8s-apps to be running ...
	I0626 20:54:49.195466   46683 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:54:49.195518   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:54:49.219014   46683 system_svc.go:56] duration metric: took 23.534309ms WaitForService to wait for kubelet.
	I0626 20:54:49.219049   46683 kubeadm.go:581] duration metric: took 1m5.775176119s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:54:49.219089   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:54:49.223397   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:54:49.223426   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:54:49.223438   46683 node_conditions.go:105] duration metric: took 4.339435ms to run NodePressure ...
	I0626 20:54:49.223452   46683 start.go:228] waiting for startup goroutines ...
	I0626 20:54:49.223461   46683 start.go:233] waiting for cluster config update ...
	I0626 20:54:49.223472   46683 start.go:242] writing updated cluster config ...
	I0626 20:54:49.223798   46683 ssh_runner.go:195] Run: rm -f paused
	I0626 20:54:49.277613   46683 start.go:652] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0626 20:54:49.279501   46683 out.go:177] 
	W0626 20:54:49.280841   46683 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0626 20:54:49.282249   46683 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0626 20:54:49.283695   46683 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-490377" cluster and "default" namespace by default
	I0626 20:54:48.401602   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:50.900375   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:51.514071   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.013330   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:52.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.900946   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.013501   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:58.014748   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.901531   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:59.401822   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:00.016725   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:02.514316   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:01.902698   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:04.400011   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:06.402149   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:05.014536   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:07.514975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:08.900297   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.900463   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.013780   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:12.514823   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:13.399907   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.400044   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.014032   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.515161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.907245   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.400962   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.015074   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.514465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.403366   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.900247   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.514993   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.012592   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.013612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.400192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.401917   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.402240   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.015647   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.513844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.900187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.902063   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.514657   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:37.514888   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:38.400753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.902398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.014755   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:42.514599   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:43.401280   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:45.902265   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:44.521736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.016422   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.902334   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:50.400765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:49.515570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.014736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.900293   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.900572   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.514047   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.013346   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.013409   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.400170   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.401528   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.013946   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:03.014845   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.902597   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:04.401919   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:05.514639   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:08.016797   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:06.901493   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:09.400229   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:11.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:10.513478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:12.514938   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:13.403138   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.901738   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.013852   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:17.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:18.400812   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.401025   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.013522   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.015651   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.016747   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.401212   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.401675   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.515343   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:28.515706   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.902301   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:29.401779   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.012844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:33.013826   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.901622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.403688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.993256   47309 pod_ready.go:81] duration metric: took 4m0.000204736s waiting for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:34.993309   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:34.993324   47309 pod_ready.go:38] duration metric: took 4m11.355630262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:56:34.993352   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:34.993410   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:34.993484   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:35.038316   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.038342   47309 cri.go:89] found id: ""
	I0626 20:56:35.038352   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:35.038414   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.042851   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:35.042914   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:35.076892   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.076925   47309 cri.go:89] found id: ""
	I0626 20:56:35.076934   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:35.076990   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.081850   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:35.081933   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:35.119872   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.119896   47309 cri.go:89] found id: ""
	I0626 20:56:35.119904   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:35.119971   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.124661   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:35.124731   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:35.158899   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.158924   47309 cri.go:89] found id: ""
	I0626 20:56:35.158933   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:35.158991   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.163512   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:35.163587   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:35.195698   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.195721   47309 cri.go:89] found id: ""
	I0626 20:56:35.195729   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:35.195786   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.199883   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:35.199935   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:35.243909   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.243932   47309 cri.go:89] found id: ""
	I0626 20:56:35.243939   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:35.243992   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.248331   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:35.248388   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:35.287985   47309 cri.go:89] found id: ""
	I0626 20:56:35.288009   47309 logs.go:284] 0 containers: []
	W0626 20:56:35.288019   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:35.288026   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:35.288085   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:35.324050   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.324129   47309 cri.go:89] found id: ""
	I0626 20:56:35.324151   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:35.324219   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.328564   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:35.328588   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:35.369968   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:35.369997   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:35.391588   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:35.391615   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:35.542328   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:35.542356   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.579140   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:35.579172   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.635428   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:35.635463   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.674715   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:35.674750   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.732788   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:35.732837   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.774860   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:35.774901   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:35.881082   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:35.881118   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.929445   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:35.929478   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.968723   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:35.968754   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:35.015798   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.514548   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.606375   47605 pod_ready.go:81] duration metric: took 4m0.000950536s waiting for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:37.606403   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:37.606412   47605 pod_ready.go:38] duration metric: took 4m2.78027212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:56:37.606429   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:37.606459   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:37.606521   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:37.668350   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:37.668383   47605 cri.go:89] found id: ""
	I0626 20:56:37.668391   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:37.668453   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.675583   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:37.675669   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:37.710826   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:37.710852   47605 cri.go:89] found id: ""
	I0626 20:56:37.710860   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:37.710916   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.715610   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:37.715671   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:37.751709   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:37.751784   47605 cri.go:89] found id: ""
	I0626 20:56:37.751812   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:37.751877   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.757177   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:37.757241   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:37.790384   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:37.790413   47605 cri.go:89] found id: ""
	I0626 20:56:37.790420   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:37.790468   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.795294   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:37.795352   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:37.832125   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:37.832157   47605 cri.go:89] found id: ""
	I0626 20:56:37.832168   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:37.832239   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.836762   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:37.836816   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:37.877789   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:37.877817   47605 cri.go:89] found id: ""
	I0626 20:56:37.877827   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:37.877887   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.885276   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:37.885348   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:37.929701   47605 cri.go:89] found id: ""
	I0626 20:56:37.929731   47605 logs.go:284] 0 containers: []
	W0626 20:56:37.929745   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:37.929755   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:37.929815   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:37.970177   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:37.970201   47605 cri.go:89] found id: ""
	I0626 20:56:37.970211   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:37.970270   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.975002   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:37.975025   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:38.022831   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:38.022862   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:38.058414   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:38.058446   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:38.168689   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:38.168726   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:38.183930   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:38.183959   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:38.224623   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:38.224653   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:38.271164   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:38.271205   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:38.308365   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:38.308391   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:38.363321   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:38.363356   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:38.510275   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:38.510310   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:38.552512   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:38.552544   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:38.586122   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:38.586155   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:38.945144   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:38.962999   47309 api_server.go:72] duration metric: took 4m18.467522928s to wait for apiserver process to appear ...
	I0626 20:56:38.963026   47309 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:56:38.963067   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:38.963129   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:39.002109   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.002133   47309 cri.go:89] found id: ""
	I0626 20:56:39.002141   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:39.002198   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.006799   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:39.006864   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:39.042531   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:39.042556   47309 cri.go:89] found id: ""
	I0626 20:56:39.042566   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:39.042621   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.047228   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:39.047301   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:39.080810   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.080842   47309 cri.go:89] found id: ""
	I0626 20:56:39.080850   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:39.080916   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.085173   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:39.085238   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:39.116857   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:39.116886   47309 cri.go:89] found id: ""
	I0626 20:56:39.116895   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:39.116946   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.121912   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:39.122007   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:39.166886   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.166912   47309 cri.go:89] found id: ""
	I0626 20:56:39.166920   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:39.166972   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.171344   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:39.171420   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:39.205333   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:39.205358   47309 cri.go:89] found id: ""
	I0626 20:56:39.205366   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:39.205445   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.211414   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:39.211491   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:39.249068   47309 cri.go:89] found id: ""
	I0626 20:56:39.249092   47309 logs.go:284] 0 containers: []
	W0626 20:56:39.249103   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:39.249110   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:39.249171   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:39.283295   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.283314   47309 cri.go:89] found id: ""
	I0626 20:56:39.283325   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:39.283372   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.287514   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:39.287537   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:39.420720   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:39.420752   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.479018   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:39.479052   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.512285   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:39.512313   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.549886   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:39.549922   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.590619   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:39.590647   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:40.076597   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:40.076642   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:40.092551   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:40.092581   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:40.135655   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:40.135699   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:40.184590   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:40.184628   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:40.238354   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:40.238393   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:40.283033   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:40.283075   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:41.567686   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:41.584431   47605 api_server.go:72] duration metric: took 4m9.528462616s to wait for apiserver process to appear ...
	I0626 20:56:41.584462   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:56:41.584492   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:41.584553   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:41.622027   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:41.622051   47605 cri.go:89] found id: ""
	I0626 20:56:41.622061   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:41.622119   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.626209   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:41.626271   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:41.658658   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:41.658680   47605 cri.go:89] found id: ""
	I0626 20:56:41.658689   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:41.658746   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.666357   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:41.666437   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:41.702344   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:41.702369   47605 cri.go:89] found id: ""
	I0626 20:56:41.702378   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:41.702443   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.706706   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:41.706775   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:41.743534   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:41.743554   47605 cri.go:89] found id: ""
	I0626 20:56:41.743561   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:41.743619   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.748338   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:41.748408   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:41.780299   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:41.780324   47605 cri.go:89] found id: ""
	I0626 20:56:41.780333   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:41.780392   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.785308   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:41.785395   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:41.819335   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:41.819361   47605 cri.go:89] found id: ""
	I0626 20:56:41.819370   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:41.819415   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.823767   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:41.823832   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:41.855049   47605 cri.go:89] found id: ""
	I0626 20:56:41.855079   47605 logs.go:284] 0 containers: []
	W0626 20:56:41.855088   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:41.855094   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:41.855147   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:41.886378   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:41.886400   47605 cri.go:89] found id: ""
	I0626 20:56:41.886408   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:41.886459   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.891748   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:41.891777   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:42.003933   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:42.003968   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:42.018182   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:42.018230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:42.145038   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:42.145074   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:42.181403   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:42.181438   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:42.224428   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:42.224467   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:42.260067   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:42.260097   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:42.312924   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:42.312972   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:42.347173   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:42.347203   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:42.920689   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:42.920725   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:42.970428   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:42.970456   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:43.021561   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.021587   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:42.886551   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:56:42.892462   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:56:42.894253   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:42.894277   47309 api_server.go:131] duration metric: took 3.931242905s to wait for apiserver health ...
	I0626 20:56:42.894286   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:42.894309   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:42.894364   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:42.931699   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:42.931728   47309 cri.go:89] found id: ""
	I0626 20:56:42.931736   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:42.931792   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.936873   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:42.936944   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:42.968701   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:42.968720   47309 cri.go:89] found id: ""
	I0626 20:56:42.968727   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:42.968778   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.974309   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:42.974381   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:43.010388   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:43.010416   47309 cri.go:89] found id: ""
	I0626 20:56:43.010425   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:43.010482   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.015524   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:43.015582   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:43.049074   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.049103   47309 cri.go:89] found id: ""
	I0626 20:56:43.049112   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:43.049173   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.053750   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:43.053814   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:43.096699   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:43.096727   47309 cri.go:89] found id: ""
	I0626 20:56:43.096734   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:43.096776   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.101210   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:43.101264   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:43.133316   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:43.133344   47309 cri.go:89] found id: ""
	I0626 20:56:43.133354   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:43.133420   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.138226   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:43.138289   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:43.169863   47309 cri.go:89] found id: ""
	I0626 20:56:43.169896   47309 logs.go:284] 0 containers: []
	W0626 20:56:43.169903   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:43.169908   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:43.169962   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:43.201859   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.201884   47309 cri.go:89] found id: ""
	I0626 20:56:43.201892   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:43.201942   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.207043   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:43.207072   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.264723   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:43.264755   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.301988   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.302016   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:43.344103   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:43.344132   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:43.357414   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:43.357445   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:43.486425   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:43.486453   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:43.529205   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:43.529239   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:43.575311   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:43.575344   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:44.074749   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:44.074790   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:44.184946   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:44.184987   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:44.221993   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:44.222028   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:44.263095   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:44.263127   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:46.817987   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:46.818014   47309 system_pods.go:61] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.818019   47309 system_pods.go:61] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.818023   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.818027   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.818031   47309 system_pods.go:61] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.818035   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.818041   47309 system_pods.go:61] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.818047   47309 system_pods.go:61] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.818052   47309 system_pods.go:74] duration metric: took 3.923762125s to wait for pod list to return data ...
	I0626 20:56:46.818061   47309 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:46.821789   47309 default_sa.go:45] found service account: "default"
	I0626 20:56:46.821811   47309 default_sa.go:55] duration metric: took 3.746079ms for default service account to be created ...
	I0626 20:56:46.821818   47309 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:46.830080   47309 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:46.830117   47309 system_pods.go:89] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.830127   47309 system_pods.go:89] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.830134   47309 system_pods.go:89] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.830141   47309 system_pods.go:89] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.830147   47309 system_pods.go:89] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.830153   47309 system_pods.go:89] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.830165   47309 system_pods.go:89] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.830178   47309 system_pods.go:89] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.830186   47309 system_pods.go:126] duration metric: took 8.363064ms to wait for k8s-apps to be running ...
	I0626 20:56:46.830198   47309 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:46.830250   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:46.851429   47309 system_svc.go:56] duration metric: took 21.223321ms WaitForService to wait for kubelet.
	I0626 20:56:46.851456   47309 kubeadm.go:581] duration metric: took 4m26.355992846s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:46.851482   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:46.856152   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:46.856177   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:46.856187   47309 node_conditions.go:105] duration metric: took 4.700595ms to run NodePressure ...
	I0626 20:56:46.856197   47309 start.go:228] waiting for startup goroutines ...
	I0626 20:56:46.856203   47309 start.go:233] waiting for cluster config update ...
	I0626 20:56:46.856212   47309 start.go:242] writing updated cluster config ...
	I0626 20:56:46.856472   47309 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:46.911414   47309 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:46.913280   47309 out.go:177] * Done! kubectl is now configured to use "no-preload-934450" cluster and "default" namespace by default
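The run-up to this point shows minikube's log-collection loop: for each control-plane component it first runs "sudo crictl ps -a --quiet --name=<component>" to resolve the container ID, then "sudo /usr/bin/crictl logs --tail 400 <id>" to capture that container's most recent 400 lines. A minimal Go sketch of the same two-step pattern follows; gatherComponentLogs and the component list are illustrative assumptions (this is not minikube's actual source), and it assumes crictl is installed and sudo is non-interactive.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the pattern in the log above: list all
// container IDs for one component, then tail each container's logs.
func gatherComponentLogs(name string) error {
	// equivalent of: sudo crictl ps -a --quiet --name=<name>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		// equivalent of: sudo /usr/bin/crictl logs --tail 400 <id>
		logs, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Printf("gathering %s logs: %v\n", c, err)
		}
	}
}

When a component has no containers (as with "kindnet" above, where crictl returns an empty ID list), strings.Fields yields nothing and the loop simply skips it, matching the 'No container was found matching "kindnet"' warnings in the log.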
	I0626 20:56:45.561459   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:56:45.567555   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0626 20:56:45.568704   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:45.568720   47605 api_server.go:131] duration metric: took 3.984252941s to wait for apiserver health ...
	I0626 20:56:45.568728   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:45.568745   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:45.568789   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:45.608235   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:45.608261   47605 cri.go:89] found id: ""
	I0626 20:56:45.608270   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:45.608335   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.612705   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:45.612774   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:45.649330   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.649353   47605 cri.go:89] found id: ""
	I0626 20:56:45.649362   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:45.649440   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.655104   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:45.655178   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:45.699690   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.699711   47605 cri.go:89] found id: ""
	I0626 20:56:45.699722   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:45.699767   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.704455   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:45.704515   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:45.743181   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:45.743209   47605 cri.go:89] found id: ""
	I0626 20:56:45.743218   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:45.743283   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.748030   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:45.748098   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:45.787325   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:45.787352   47605 cri.go:89] found id: ""
	I0626 20:56:45.787360   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:45.787406   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.792119   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:45.792191   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:45.833192   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:45.833215   47605 cri.go:89] found id: ""
	I0626 20:56:45.833222   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:45.833279   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.838399   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:45.838464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:45.878372   47605 cri.go:89] found id: ""
	I0626 20:56:45.878403   47605 logs.go:284] 0 containers: []
	W0626 20:56:45.878410   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:45.878415   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:45.878464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:45.917051   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:45.917074   47605 cri.go:89] found id: ""
	I0626 20:56:45.917081   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:45.917125   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.921484   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:45.921508   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.962659   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:45.962699   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.993644   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:45.993674   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:46.055087   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:46.055130   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:46.574535   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:46.574581   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:46.617139   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:46.617174   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:46.729727   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:46.729768   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:46.860871   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:46.860908   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:46.922618   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:46.922657   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:46.975973   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:46.976000   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:47.017458   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:47.017488   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:47.058540   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:47.058567   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:49.582112   47605 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:49.582139   47605 system_pods.go:61] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.582145   47605 system_pods.go:61] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.582149   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.582153   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.582157   47605 system_pods.go:61] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.582163   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.582169   47605 system_pods.go:61] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.582175   47605 system_pods.go:61] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.582180   47605 system_pods.go:74] duration metric: took 4.013448182s to wait for pod list to return data ...
	I0626 20:56:49.582187   47605 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:49.588793   47605 default_sa.go:45] found service account: "default"
	I0626 20:56:49.588827   47605 default_sa.go:55] duration metric: took 6.634132ms for default service account to be created ...
	I0626 20:56:49.588836   47605 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:49.596519   47605 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:49.596549   47605 system_pods.go:89] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.596555   47605 system_pods.go:89] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.596562   47605 system_pods.go:89] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.596570   47605 system_pods.go:89] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.596577   47605 system_pods.go:89] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.596585   47605 system_pods.go:89] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.596600   47605 system_pods.go:89] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.596612   47605 system_pods.go:89] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.596622   47605 system_pods.go:126] duration metric: took 7.781697ms to wait for k8s-apps to be running ...
	I0626 20:56:49.596633   47605 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:49.596684   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:49.613188   47605 system_svc.go:56] duration metric: took 16.545322ms WaitForService to wait for kubelet.
	I0626 20:56:49.613212   47605 kubeadm.go:581] duration metric: took 4m17.557252465s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:49.613231   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:49.616820   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:49.616845   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:49.616854   47605 node_conditions.go:105] duration metric: took 3.619443ms to run NodePressure ...
	I0626 20:56:49.616864   47605 start.go:228] waiting for startup goroutines ...
	I0626 20:56:49.616870   47605 start.go:233] waiting for cluster config update ...
	I0626 20:56:49.616878   47605 start.go:242] writing updated cluster config ...
	I0626 20:56:49.617126   47605 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:49.665468   47605 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:49.667447   47605 out.go:177] * Done! kubectl is now configured to use "embed-certs-299839" cluster and "default" namespace by default
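Before declaring each cluster ready, both runs poll the apiserver's /healthz endpoint until it answers 200 (the "Checking apiserver healthz at https://...:8443/healthz ... returned 200: ok" lines above). A minimal sketch of such a poll, assuming a self-signed apiserver certificate and a fixed 2-second retry interval (waitForHealthz and both constants are illustrative, not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// the test cluster's apiserver certificate is not in the host
		// trust store, so this sketch skips TLS verification
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no 200 from %s within %s", url, timeout)
}

func main() {
	// endpoint taken from the embed-certs-299839 log above
	if err := waitForHealthz("https://192.168.39.51:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Only after the healthz check succeeds does each run list kube-system pods, verify the default service account, and confirm the kubelet service is active, as in the system_pods.go/default_sa.go/system_svc.go lines above.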
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 20:46:25 UTC, ends at Mon 2023-06-26 21:05:48 UTC. --
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.284840197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2c9321f9-ee9f-44d4-b380-26d7f935de63 name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.519160415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8f5f7cc2-9e76-4dff-a2d0-7ecee6e58227 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.519223293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8f5f7cc2-9e76-4dff-a2d0-7ecee6e58227 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.519492969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8f5f7cc2-9e76-4dff-a2d0-7ecee6e58227 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.563802974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=76ee4826-d5dc-47c4-904c-e3a0ca08a9b1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.563872429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=76ee4826-d5dc-47c4-904c-e3a0ca08a9b1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.564125001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=76ee4826-d5dc-47c4-904c-e3a0ca08a9b1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.603427359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dfb51965-9939-4fcc-bbbd-3aa80bd07b87 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.603520130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dfb51965-9939-4fcc-bbbd-3aa80bd07b87 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.603687523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dfb51965-9939-4fcc-bbbd-3aa80bd07b87 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.642566648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d5131022-8094-4335-995c-d1b9607d8dd6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.642635844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d5131022-8094-4335-995c-d1b9607d8dd6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.642861409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d5131022-8094-4335-995c-d1b9607d8dd6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.678456690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bdba5bc1-f942-40e9-835d-4fc3ee9f8aa0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.678588840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bdba5bc1-f942-40e9-835d-4fc3ee9f8aa0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.678761442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bdba5bc1-f942-40e9-835d-4fc3ee9f8aa0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.720919165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fab05e9c-c9d7-4ed6-8fb2-04bca2add5d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.721094737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fab05e9c-c9d7-4ed6-8fb2-04bca2add5d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:48 no-preload-934450 crio[733]: time="2023-06-26 21:05:48.721311618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fab05e9c-c9d7-4ed6-8fb2-04bca2add5d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	cce86e4ac6d10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   f586bb4316f81
	d9a74ded05e96       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   13 minutes ago      Running             kube-proxy                0                   a82180c52ff4b
	3f594979249ec       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   c73b0e0df73f1
	4bf419c5667b7       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   13 minutes ago      Running             kube-scheduler            2                   1504527283bce
	9c97d6872e3eb       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   13 minutes ago      Running             kube-controller-manager   2                   f49b16e9b3279
	677700e637cf7       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   13 minutes ago      Running             kube-apiserver            2                   fae64fcf8ca26
	d8bd0503ff17e       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   13 minutes ago      Running             etcd                      2                   8796f4334146d
	
	* 
	* ==> coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48469 - 43928 "HINFO IN 3872093173642719776.143745242958422132. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013394887s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-934450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-934450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=no-preload-934450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_52_07_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-934450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 21:05:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 21:02:39 +0000   Mon, 26 Jun 2023 20:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 21:02:39 +0000   Mon, 26 Jun 2023 20:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 21:02:39 +0000   Mon, 26 Jun 2023 20:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 21:02:39 +0000   Mon, 26 Jun 2023 20:52:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.38
	  Hostname:    no-preload-934450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 13027f91e921404a858d73b7fe3591c7
	  System UUID:                13027f91-e921-404a-858d-73b7fe3591c7
	  Boot ID:                    97e1de77-4b2f-4df0-b11c-e0ff0e97cf17
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-xm96k                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-934450                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-934450             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-934450    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-jhz99                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-934450             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-74d5c6b9c-4dkpm               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-934450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-934450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-934450 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-934450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-934450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-934450 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node no-preload-934450 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node no-preload-934450 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-934450 event: Registered Node no-preload-934450 in Controller
	
	* 
	* ==> dmesg <==
	* [Jun26 20:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071946] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.100875] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.344185] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143847] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.388945] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.661429] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.096753] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.139933] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.111150] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[  +0.213712] systemd-fstab-generator[718]: Ignoring "noauto" for root device
	[Jun26 20:47] systemd-fstab-generator[1248]: Ignoring "noauto" for root device
	[ +18.928703] kauditd_printk_skb: 29 callbacks suppressed
	[Jun26 20:51] systemd-fstab-generator[3853]: Ignoring "noauto" for root device
	[Jun26 20:52] systemd-fstab-generator[4184]: Ignoring "noauto" for root device
	[ +26.819780] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] <==
	* {"level":"info","ts":"2023-06-26T20:52:01.400Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"21c1ddd48015c0d4","initial-advertise-peer-urls":["https://192.168.50.38:2380"],"listen-peer-urls":["https://192.168.50.38:2380"],"advertise-client-urls":["https://192.168.50.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-26T20:52:01.400Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-26T20:52:01.401Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.50.38:2380"}
	{"level":"info","ts":"2023-06-26T20:52:01.401Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.50.38:2380"}
	{"level":"info","ts":"2023-06-26T20:52:01.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21c1ddd48015c0d4 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-26T20:52:01.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21c1ddd48015c0d4 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-26T20:52:01.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21c1ddd48015c0d4 received MsgPreVoteResp from 21c1ddd48015c0d4 at term 1"}
	{"level":"info","ts":"2023-06-26T20:52:01.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21c1ddd48015c0d4 became candidate at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:01.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21c1ddd48015c0d4 received MsgVoteResp from 21c1ddd48015c0d4 at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:01.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21c1ddd48015c0d4 became leader at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:01.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 21c1ddd48015c0d4 elected leader 21c1ddd48015c0d4 at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:01.633Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:01.635Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"21c1ddd48015c0d4","local-member-attributes":"{Name:no-preload-934450 ClientURLs:[https://192.168.50.38:2379]}","request-path":"/0/members/21c1ddd48015c0d4/attributes","cluster-id":"8f09f9d2d10c62aa","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-26T20:52:01.637Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T20:52:01.638Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8f09f9d2d10c62aa","local-member-id":"21c1ddd48015c0d4","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:01.638Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:01.638Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:01.638Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T20:52:01.639Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-26T20:52:01.639Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-26T20:52:01.639Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-26T20:52:01.640Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.50.38:2379"}
	{"level":"info","ts":"2023-06-26T21:02:01.685Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":725}
	{"level":"info","ts":"2023-06-26T21:02:01.687Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":725,"took":"2.357682ms","hash":2735759071}
	{"level":"info","ts":"2023-06-26T21:02:01.687Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2735759071,"revision":725,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  21:05:49 up 19 min,  0 users,  load average: 0.01, 0.12, 0.17
	Linux no-preload-934450 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] <==
	* E0626 21:02:04.798494       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:02:04.798530       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0626 21:02:04.798631       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:02:04.800623       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:03:03.645443       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.25.93:443: connect: connection refused
	I0626 21:03:03.645682       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:03:04.799792       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:03:04.799979       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:03:04.800163       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:03:04.800940       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:03:04.801060       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:03:04.801223       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:04:03.645988       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.25.93:443: connect: connection refused
	I0626 21:04:03.646295       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0626 21:05:03.644983       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.25.93:443: connect: connection refused
	I0626 21:05:03.645428       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:05:04.801151       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:05:04.801402       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:05:04.801450       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:05:04.801417       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:05:04.801523       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:05:04.803375       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] <==
	* E0626 20:59:48.848445       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 20:59:49.318722       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:00:18.855082       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:00:19.328484       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:00:48.862401       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:00:49.337753       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:01:18.871742       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:01:19.349550       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:01:48.876938       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:01:49.357354       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:02:18.883386       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:02:19.366954       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:02:48.889575       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:02:49.378618       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:03:18.897975       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:03:19.388477       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:03:48.904460       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:03:49.397314       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:18.910204       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:19.406179       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:48.916139       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:49.418225       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:18.923199       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:05:19.429673       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:48.933150       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	
	* 
	* ==> kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] <==
	* I0626 20:52:24.495965       1 node.go:141] Successfully retrieved node IP: 192.168.50.38
	I0626 20:52:24.496707       1 server_others.go:110] "Detected node IP" address="192.168.50.38"
	I0626 20:52:24.496752       1 server_others.go:554] "Using iptables proxy"
	I0626 20:52:24.670180       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0626 20:52:24.670232       1 server_others.go:192] "Using iptables Proxier"
	I0626 20:52:24.670305       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 20:52:24.670787       1 server.go:658] "Version info" version="v1.27.3"
	I0626 20:52:24.670834       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 20:52:24.672712       1 config.go:188] "Starting service config controller"
	I0626 20:52:24.672754       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 20:52:24.677172       1 config.go:97] "Starting endpoint slice config controller"
	I0626 20:52:24.677243       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 20:52:24.682546       1 config.go:315] "Starting node config controller"
	I0626 20:52:24.682621       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 20:52:24.773081       1 shared_informer.go:318] Caches are synced for service config
	I0626 20:52:24.778409       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 20:52:24.783070       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] <==
	* W0626 20:52:03.822238       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:52:03.822295       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 20:52:04.700227       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 20:52:04.700288       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0626 20:52:04.759847       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0626 20:52:04.759965       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0626 20:52:04.814460       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 20:52:04.814514       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0626 20:52:04.860165       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:04.860219       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:04.884732       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 20:52:04.884788       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0626 20:52:04.933687       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:52:04.933765       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0626 20:52:04.957607       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:04.957664       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:04.976781       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:52:04.976842       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 20:52:04.984079       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:52:04.984117       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0626 20:52:05.087420       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:52:05.087477       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 20:52:05.315583       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 20:52:05.315638       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0626 20:52:08.292894       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 20:46:25 UTC, ends at Mon 2023-06-26 21:05:49 UTC. --
	Jun 26 21:03:07 no-preload-934450 kubelet[4191]: E0626 21:03:07.344377    4191 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:03:07 no-preload-934450 kubelet[4191]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:03:07 no-preload-934450 kubelet[4191]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:03:07 no-preload-934450 kubelet[4191]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:03:08 no-preload-934450 kubelet[4191]: E0626 21:03:08.204207    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:03:19 no-preload-934450 kubelet[4191]: E0626 21:03:19.204651    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:03:32 no-preload-934450 kubelet[4191]: E0626 21:03:32.203621    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:03:43 no-preload-934450 kubelet[4191]: E0626 21:03:43.203477    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:03:54 no-preload-934450 kubelet[4191]: E0626 21:03:54.203765    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:04:07 no-preload-934450 kubelet[4191]: E0626 21:04:07.342721    4191 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:04:07 no-preload-934450 kubelet[4191]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:04:07 no-preload-934450 kubelet[4191]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:04:07 no-preload-934450 kubelet[4191]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:04:09 no-preload-934450 kubelet[4191]: E0626 21:04:09.203760    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:04:24 no-preload-934450 kubelet[4191]: E0626 21:04:24.203256    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:04:38 no-preload-934450 kubelet[4191]: E0626 21:04:38.204428    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:04:49 no-preload-934450 kubelet[4191]: E0626 21:04:49.208553    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:05:01 no-preload-934450 kubelet[4191]: E0626 21:05:01.204187    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:05:07 no-preload-934450 kubelet[4191]: E0626 21:05:07.348422    4191 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:05:07 no-preload-934450 kubelet[4191]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:05:07 no-preload-934450 kubelet[4191]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:05:07 no-preload-934450 kubelet[4191]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:05:16 no-preload-934450 kubelet[4191]: E0626 21:05:16.204493    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:05:28 no-preload-934450 kubelet[4191]: E0626 21:05:28.203806    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:05:43 no-preload-934450 kubelet[4191]: E0626 21:05:43.203488    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	
	* 
	* ==> storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] <==
	* I0626 20:52:25.057729       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 20:52:25.074122       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 20:52:25.074351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 20:52:25.089465       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 20:52:25.092288       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-934450_376175c8-174a-41b5-aa54-24ec858da196!
	I0626 20:52:25.092545       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ee38453-12ec-41a3-9a9e-be92985c03a2", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-934450_376175c8-174a-41b5-aa54-24ec858da196 became leader
	I0626 20:52:25.192707       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-934450_376175c8-174a-41b5-aa54-24ec858da196!
	

                                                
                                                
-- /stdout --
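Note: the repeated metrics-server ImagePullBackOff entries in the kubelet log above are expected for this suite rather than the failure itself: per the Audit table later in this report, the addon was deliberately pointed at a non-existent registry, roughly this invocation (reconstructed from the Audit entry; the exact binary path is an assumption):

	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-934450 \
		--images=MetricsServer=registry.k8s.io/echoserver:1.4 \
		--registries=MetricsServer=fake.domain

The ip6tables canary warning likewise looks like routine noise from the guest kernel lacking the ip6tables nat table; neither explains the missing kubernetes-dashboard pod the test was actually waiting for.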
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-934450 -n no-preload-934450
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-934450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-4dkpm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-934450 describe pod metrics-server-74d5c6b9c-4dkpm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-934450 describe pod metrics-server-74d5c6b9c-4dkpm: exit status 1 (89.418356ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-4dkpm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-934450 describe pod metrics-server-74d5c6b9c-4dkpm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.45s)
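Note: the NotFound above appears to be an artifact of the post-mortem helper rather than evidence that the pod vanished: metrics-server-74d5c6b9c-4dkpm lives in kube-system (see the kubelet entries earlier), but the describe is issued without a namespace flag and therefore looks in the default namespace. A minimal sketch of the namespaced equivalent:

	kubectl --context no-preload-934450 -n kube-system describe pod metrics-server-74d5c6b9c-4dkpm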

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0626 20:58:30.705586   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:59:00.824698   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 20:59:53.753302   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 21:01:48.326995   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299839 -n embed-certs-299839
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-06-26 21:05:50.271051799 +0000 UTC m=+5414.771079645
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
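Note: the interleaved cert_rotation errors reference client certificates for profiles (functional-244475, addons-118062, ingress-addon-legacy-759751) that no longer exist on disk; they appear to come from the shared test binary's background certificate watcher and are unrelated to this failure. The test fails simply because no pod labelled k8s-app=kubernetes-dashboard ever appeared; a sketch of watching for it by hand, assuming the same kubectl context:

	kubectl --context embed-certs-299839 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard --watch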
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299839 -n embed-certs-299839
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-299839 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-299839 logs -n 25: (1.716960378s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-490377        | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-123924                              | stopped-upgrade-123924       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603225 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | disable-driver-mounts-603225                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934450             | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC | 26 Jun 23 20:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490377             | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-299839            | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-473235  | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC | 26 Jun 23 20:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC |                     |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934450                  | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-299839                 | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-473235       | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:52 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 20:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 20:44:35.222921   47779 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:44:35.223059   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223070   47779 out.go:309] Setting ErrFile to fd 2...
	I0626 20:44:35.223074   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223199   47779 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:44:35.223797   47779 out.go:303] Setting JSON to false
	I0626 20:44:35.224674   47779 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5222,"bootTime":1687807053,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 20:44:35.224734   47779 start.go:137] virtualization: kvm guest
	I0626 20:44:35.226901   47779 out.go:177] * [default-k8s-diff-port-473235] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 20:44:35.228842   47779 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 20:44:35.228804   47779 notify.go:220] Checking for updates...
	I0626 20:44:35.230224   47779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 20:44:35.231788   47779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:44:35.233239   47779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:44:35.234554   47779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 20:44:35.236823   47779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 20:44:35.238432   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:44:35.238825   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.238878   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.253669   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0626 20:44:35.254014   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.254589   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.254610   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.254907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.255090   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.255322   47779 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 20:44:35.255597   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.255627   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.269620   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39451
	I0626 20:44:35.270027   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.270571   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.270599   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.270857   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.271037   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.302607   47779 out.go:177] * Using the kvm2 driver based on existing profile
	I0626 20:44:35.303877   47779 start.go:297] selected driver: kvm2
	I0626 20:44:35.303889   47779 start.go:954] validating driver "kvm2" against &{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.303997   47779 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 20:44:35.304600   47779 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.304681   47779 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 20:44:35.319036   47779 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 20:44:35.319459   47779 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 20:44:35.319499   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:44:35.319516   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:44:35.319532   47779 start_flags.go:319] config:
	{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.319725   47779 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.321690   47779 out.go:177] * Starting control plane node default-k8s-diff-port-473235 in cluster default-k8s-diff-port-473235
	I0626 20:44:33.713644   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:35.323076   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:44:35.323119   47779 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 20:44:35.323145   47779 cache.go:57] Caching tarball of preloaded images
	I0626 20:44:35.323245   47779 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 20:44:35.323260   47779 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 20:44:35.323385   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:44:35.323607   47779 start.go:365] acquiring machines lock for default-k8s-diff-port-473235: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:44:39.793629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:42.865602   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:48.945651   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:52.017646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:58.097650   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:01.169629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:07.249647   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:10.321634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:16.401660   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:19.473641   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:25.553634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:28.625721   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:34.705617   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:37.777753   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:43.857659   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:46.929661   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:53.009637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:56.081646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:02.161637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:05.233633   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:11.313640   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:14.317303   47309 start.go:369] acquired machines lock for "no-preload-934450" in 2m47.59820508s
	I0626 20:46:14.317355   47309 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:14.317388   47309 fix.go:54] fixHost starting: 
	I0626 20:46:14.317703   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:14.317733   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:14.331991   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0626 20:46:14.332358   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:14.332862   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:46:14.332888   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:14.333180   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:14.333368   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:14.333556   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:46:14.334930   47309 fix.go:102] recreateIfNeeded on no-preload-934450: state=Stopped err=<nil>
	I0626 20:46:14.334954   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	W0626 20:46:14.335122   47309 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:14.336692   47309 out.go:177] * Restarting existing kvm2 VM for "no-preload-934450" ...
	I0626 20:46:14.338056   47309 main.go:141] libmachine: (no-preload-934450) Calling .Start
	I0626 20:46:14.338201   47309 main.go:141] libmachine: (no-preload-934450) Ensuring networks are active...
	I0626 20:46:14.339255   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network default is active
	I0626 20:46:14.339575   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network mk-no-preload-934450 is active
	I0626 20:46:14.339980   47309 main.go:141] libmachine: (no-preload-934450) Getting domain xml...
	I0626 20:46:14.340638   47309 main.go:141] libmachine: (no-preload-934450) Creating domain...
	I0626 20:46:15.550725   47309 main.go:141] libmachine: (no-preload-934450) Waiting to get IP...
	I0626 20:46:15.551641   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.552053   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.552125   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.552057   48070 retry.go:31] will retry after 285.629833ms: waiting for machine to come up
	I0626 20:46:15.839584   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.839950   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.839976   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.839920   48070 retry.go:31] will retry after 318.234269ms: waiting for machine to come up
	I0626 20:46:16.159361   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.159793   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.159823   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.159752   48070 retry.go:31] will retry after 486.280811ms: waiting for machine to come up
	I0626 20:46:14.315357   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:14.315401   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:46:14.317194   46683 machine.go:91] provisioned docker machine in 4m37.381545898s
	I0626 20:46:14.317230   46683 fix.go:56] fixHost completed within 4m37.403983922s
	I0626 20:46:14.317236   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 4m37.404002624s
	W0626 20:46:14.317252   46683 start.go:672] error starting host: provision: host is not running
	W0626 20:46:14.317326   46683 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0626 20:46:14.317333   46683 start.go:687] Will try again in 5 seconds ...
	I0626 20:46:16.647364   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.647777   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.647803   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.647721   48070 retry.go:31] will retry after 396.658606ms: waiting for machine to come up
	I0626 20:46:17.046604   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.047131   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.047156   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.047033   48070 retry.go:31] will retry after 741.382401ms: waiting for machine to come up
	I0626 20:46:17.789616   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.790035   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.790068   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.790014   48070 retry.go:31] will retry after 636.769895ms: waiting for machine to come up
	I0626 20:46:18.427899   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:18.428300   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:18.428326   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:18.428272   48070 retry.go:31] will retry after 869.736092ms: waiting for machine to come up
	I0626 20:46:19.299429   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:19.299742   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:19.299765   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:19.299717   48070 retry.go:31] will retry after 1.261709663s: waiting for machine to come up
	I0626 20:46:20.563421   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:20.563778   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:20.563807   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:20.563751   48070 retry.go:31] will retry after 1.280588584s: waiting for machine to come up
	I0626 20:46:19.318965   46683 start.go:365] acquiring machines lock for old-k8s-version-490377: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:46:21.846094   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:21.846530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:21.846557   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:21.846475   48070 retry.go:31] will retry after 1.542478163s: waiting for machine to come up
	I0626 20:46:23.391088   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:23.391530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:23.391559   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:23.391474   48070 retry.go:31] will retry after 2.115450652s: waiting for machine to come up
	I0626 20:46:25.508447   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:25.508882   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:25.508915   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:25.508826   48070 retry.go:31] will retry after 3.403199971s: waiting for machine to come up
	I0626 20:46:28.916347   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:28.916756   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:28.916782   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:28.916706   48070 retry.go:31] will retry after 3.011345508s: waiting for machine to come up
	I0626 20:46:33.094365   47605 start.go:369] acquired machines lock for "embed-certs-299839" in 2m23.878841424s
	I0626 20:46:33.094419   47605 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:33.094440   47605 fix.go:54] fixHost starting: 
	I0626 20:46:33.094827   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:33.094856   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:33.114045   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0626 20:46:33.114400   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:33.114927   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:46:33.114949   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:33.115244   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:33.115434   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:33.115573   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:46:33.116751   47605 fix.go:102] recreateIfNeeded on embed-certs-299839: state=Stopped err=<nil>
	I0626 20:46:33.116783   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	W0626 20:46:33.116944   47605 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:33.119904   47605 out.go:177] * Restarting existing kvm2 VM for "embed-certs-299839" ...
	I0626 20:46:33.121277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Start
	I0626 20:46:33.121442   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring networks are active...
	I0626 20:46:33.122062   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network default is active
	I0626 20:46:33.122397   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network mk-embed-certs-299839 is active
	I0626 20:46:33.122783   47605 main.go:141] libmachine: (embed-certs-299839) Getting domain xml...
	I0626 20:46:33.123400   47605 main.go:141] libmachine: (embed-certs-299839) Creating domain...
	I0626 20:46:31.930997   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931492   47309 main.go:141] libmachine: (no-preload-934450) Found IP for machine: 192.168.50.38
	I0626 20:46:31.931507   47309 main.go:141] libmachine: (no-preload-934450) Reserving static IP address...
	I0626 20:46:31.931524   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has current primary IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931877   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.931901   47309 main.go:141] libmachine: (no-preload-934450) DBG | skip adding static IP to network mk-no-preload-934450 - found existing host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"}
	I0626 20:46:31.931916   47309 main.go:141] libmachine: (no-preload-934450) Reserved static IP address: 192.168.50.38
	I0626 20:46:31.931928   47309 main.go:141] libmachine: (no-preload-934450) DBG | Getting to WaitForSSH function...
	I0626 20:46:31.931939   47309 main.go:141] libmachine: (no-preload-934450) Waiting for SSH to be available...
	I0626 20:46:31.934393   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934786   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.934814   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934954   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH client type: external
	I0626 20:46:31.934971   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa (-rw-------)
	I0626 20:46:31.935060   47309 main.go:141] libmachine: (no-preload-934450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:31.935091   47309 main.go:141] libmachine: (no-preload-934450) DBG | About to run SSH command:
	I0626 20:46:31.935112   47309 main.go:141] libmachine: (no-preload-934450) DBG | exit 0
	I0626 20:46:32.021036   47309 main.go:141] libmachine: (no-preload-934450) DBG | SSH cmd err, output: <nil>: 
	I0626 20:46:32.021357   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetConfigRaw
	I0626 20:46:32.022056   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.024943   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025390   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.025426   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025663   47309 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/config.json ...
	I0626 20:46:32.025851   47309 machine.go:88] provisioning docker machine ...
	I0626 20:46:32.025868   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.026092   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026257   47309 buildroot.go:166] provisioning hostname "no-preload-934450"
	I0626 20:46:32.026280   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026450   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.028213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028583   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.028618   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028699   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.028869   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029019   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029154   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.029415   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.029867   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.029887   47309 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-934450 && echo "no-preload-934450" | sudo tee /etc/hostname
	I0626 20:46:32.150597   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-934450
	
	I0626 20:46:32.150629   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.153096   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153441   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.153486   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153576   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.153773   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.153984   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.154125   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.154288   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.154697   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.154723   47309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-934450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-934450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-934450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:32.270792   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:32.270827   47309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:32.270890   47309 buildroot.go:174] setting up certificates
	I0626 20:46:32.270902   47309 provision.go:83] configureAuth start
	I0626 20:46:32.270922   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.271206   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.273824   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274189   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.274213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274310   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.276495   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.276896   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.276927   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.277062   47309 provision.go:138] copyHostCerts
	I0626 20:46:32.277118   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:32.277126   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:32.277188   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:32.277271   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:32.277278   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:32.277300   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:32.277351   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:32.277357   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:32.277393   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:32.277450   47309 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.no-preload-934450 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube no-preload-934450]
	I0626 20:46:32.417361   47309 provision.go:172] copyRemoteCerts
	I0626 20:46:32.417430   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:32.417452   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.419946   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420300   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.420331   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420501   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.420703   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.420864   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.421017   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.501807   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 20:46:32.524284   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:32.546766   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0626 20:46:32.569677   47309 provision.go:86] duration metric: configureAuth took 298.742863ms
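
configureAuth above regenerates the machine's server certificate with SANs covering the VM IP, localhost, and the machine name, then pushes the CA and server key pair to /etc/docker over SSH (the three scp lines). A sketch of that copy plan as plain data, with paths taken from the log (the certCopy struct is illustrative, not minikube's actual type):

    package main

    import "fmt"

    // certCopy pairs a local PEM file with its destination on the guest,
    // mirroring the three scp calls in the log.
    type certCopy struct {
    	local, remote string
    }

    func main() {
    	base := "/home/jenkins/minikube-integration/16761-7242/.minikube"
    	plan := []certCopy{
    		{base + "/machines/server-key.pem", "/etc/docker/server-key.pem"},
    		{base + "/certs/ca.pem", "/etc/docker/ca.pem"},
    		{base + "/machines/server.pem", "/etc/docker/server.pem"},
    	}
    	for _, c := range plan {
    		fmt.Printf("scp %s --> %s\n", c.local, c.remote)
    	}
    }
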
	I0626 20:46:32.569711   47309 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:32.569925   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:32.570026   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.572516   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.572864   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.572901   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.573011   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.573178   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573350   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573492   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.573646   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.574084   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.574102   47309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:32.859482   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:32.859509   47309 machine.go:91] provisioned docker machine in 833.647496ms
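
The %!s(MISSING) in the command above (and in the later date +%!s(MISSING).%!N(MISSING) and stat -c lines) is a logging artifact, not what ran on the guest: the remote command contains a literal %s, %N, or %y for printf, date, or stat, and when the command string is later rendered through Go's fmt as a format string with no matching argument, each verb prints as %!x(MISSING). A self-contained reproduction:

    package main

    import "fmt"

    func main() {
    	cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"...\" | sudo tee /etc/sysconfig/crio.minikube"
    	fmt.Printf(cmd + "\n") // wrong: cmd is treated as a format string, so %s renders as %!s(MISSING)
    	fmt.Print(cmd + "\n")  // right: the command is printed verbatim
    }
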
	I0626 20:46:32.859519   47309 start.go:300] post-start starting for "no-preload-934450" (driver="kvm2")
	I0626 20:46:32.859527   47309 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:32.859543   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.859892   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:32.859942   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.862731   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863099   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.863131   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863250   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.863434   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.863570   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.863698   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.946748   47309 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:32.951257   47309 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:32.951278   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:32.951351   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:32.951436   47309 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:32.951516   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:32.959676   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
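
filesync mirrors everything under .minikube/files onto the guest's root filesystem, which is how 144432.pem lands in /etc/ssl/certs: the path relative to the files root becomes the absolute guest path. A sketch of that mapping (assetTarget is a hypothetical helper standing in for filesync.go):

    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    // assetTarget maps a file under .minikube/files onto the guest: the
    // path relative to the files root becomes the absolute guest path.
    func assetTarget(filesRoot, local string) (string, error) {
    	rel, err := filepath.Rel(filesRoot, local)
    	if err != nil {
    		return "", err
    	}
    	return "/" + rel, nil
    }

    func main() {
    	root := "/home/jenkins/minikube-integration/16761-7242/.minikube/files"
    	dst, err := assetTarget(root, root+"/etc/ssl/certs/144432.pem")
    	fmt.Println(dst, err) // /etc/ssl/certs/144432.pem <nil>
    }
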
	I0626 20:46:32.982687   47309 start.go:303] post-start completed in 123.154915ms
	I0626 20:46:32.982714   47309 fix.go:56] fixHost completed within 18.665325334s
	I0626 20:46:32.982763   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.985318   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985693   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.985725   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985868   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.986072   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986226   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986388   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.986547   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.986951   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.986968   47309 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0626 20:46:33.094211   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812393.043726278
	
	I0626 20:46:33.094239   47309 fix.go:206] guest clock: 1687812393.043726278
	I0626 20:46:33.094248   47309 fix.go:219] Guest: 2023-06-26 20:46:33.043726278 +0000 UTC Remote: 2023-06-26 20:46:32.98271893 +0000 UTC m=+186.399054274 (delta=61.007348ms)
	I0626 20:46:33.094272   47309 fix.go:190] guest clock delta is within tolerance: 61.007348ms
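
The guest clock is read over SSH with date +%s.%N and compared against the host; the VM clock is only resynced when the absolute delta exceeds a tolerance, and here the 61ms delta passes. A minimal sketch of the check; the 2-second threshold below is an assumption for illustration, not necessarily minikube's default:

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest/host clock delta is
    // small enough to leave the guest clock alone.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
    	d := guest.Sub(host)
    	if d < 0 {
    		d = -d
    	}
    	return d <= tol
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(61007348 * time.Nanosecond) // the 61.007348ms delta from the log
    	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
    }
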
	I0626 20:46:33.094277   47309 start.go:83] releasing machines lock for "no-preload-934450", held for 18.776943332s
	I0626 20:46:33.094309   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.094577   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:33.097365   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097744   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.097775   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097979   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098382   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098586   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098661   47309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:33.098712   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.098797   47309 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:33.098816   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.101252   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101554   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101580   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101599   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101719   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.101873   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.101951   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101981   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.102007   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102160   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.102182   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.102316   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.102443   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102551   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.210044   47309 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:33.215912   47309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:33.359955   47309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:33.366146   47309 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:33.366217   47309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:33.380504   47309 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:33.380526   47309 start.go:466] detecting cgroup driver to use...
	I0626 20:46:33.380579   47309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:33.393306   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:33.404983   47309 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:33.405038   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:33.418216   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:33.432337   47309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:33.531250   47309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:33.645556   47309 docker.go:212] disabling docker service ...
	I0626 20:46:33.645633   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:33.659515   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:33.671856   47309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:33.774921   47309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:33.883215   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
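
Because this profile runs CRI-O, the Docker-based runtimes are taken out of the picture in a fixed order: stop sockets before services, then disable and mask so socket activation cannot bring them back. A local sketch of the same sequence (executed on the guest via ssh_runner in reality):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same order as the log: cri-docker first, then docker itself.
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "cri-docker.socket"},
    		{"systemctl", "stop", "-f", "cri-docker.service"},
    		{"systemctl", "disable", "cri-docker.socket"},
    		{"systemctl", "mask", "cri-docker.service"},
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
    			fmt.Printf("%v: %v (%s)\n", s, err, out)
    		}
    	}
    }
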
	I0626 20:46:33.898847   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:33.917506   47309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:33.917580   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.928683   47309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:33.928743   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.939242   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.949833   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
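
Taken together, those sed invocations pin the pause image, force the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. A Go equivalent of the same four edits applied to an in-memory config (a sketch of the effect, not minikube's implementation, which shells out to sed as logged):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // applyCrioEdits mirrors the sed edits in the log.
    func applyCrioEdits(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
    		ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
    	return conf
    }

    func main() {
    	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
    	fmt.Print(applyCrioEdits(in))
    }
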
	I0626 20:46:33.960544   47309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:33.970988   47309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:33.979977   47309 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:33.980018   47309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:33.992692   47309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
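
The sysctl probe above fails with status 255 because br_netfilter is not yet loaded, so /proc/sys/net/bridge/ does not exist; modprobe br_netfilter creates that subtree, and ip_forward is then switched on for pod traffic. The failure is tolerated by design ("which might be okay"). A sketch of the same probe-then-load fallback, assuming a Linux host with sysctl and modprobe on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Probe first: this fails until br_netfilter is loaded, because
    	// the /proc/sys/net/bridge tree does not exist yet.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("probe failed (expected before modprobe):", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Println("modprobe br_netfilter:", err)
    		}
    	}
    	// Enable IPv4 forwarding for pod-to-pod traffic.
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		fmt.Println("enable ip_forward:", err)
    	}
    }
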
	I0626 20:46:34.001898   47309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:34.099514   47309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:46:34.265988   47309 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:34.266060   47309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:34.273678   47309 start.go:534] Will wait 60s for crictl version
	I0626 20:46:34.273739   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.277401   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:34.312548   47309 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:34.312630   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.360715   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.413882   47309 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:34.415181   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:34.417841   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418166   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:34.418189   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418410   47309 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:34.422651   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:34.434668   47309 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:34.434717   47309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:34.465589   47309 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:34.465614   47309 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:46:34.465690   47309 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.465708   47309 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.465738   47309 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.465754   47309 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.465788   47309 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.465828   47309 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.465693   47309 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.465936   47309 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.467120   47309 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.467219   47309 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.467247   47309 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.467295   47309 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.467306   47309 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.467250   47309 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
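
Every "daemon lookup ... No such image" line here is expected on this host: minikube first asks a local Docker daemon for each required image and only on a miss falls back to its on-disk cache (and ultimately the registry). The loop below sketches that ordering; lookupDaemon and lookupCache are hypothetical stand-ins for the real image.go helpers:

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("no such image")

    func lookupDaemon(ref string) error { return errNotFound } // no local daemon copy on this runner
    func lookupCache(ref string) error  { return nil }         // cached under .minikube/cache/images

    func main() {
    	images := []string{
    		"registry.k8s.io/kube-apiserver:v1.27.3",
    		"registry.k8s.io/etcd:3.5.7-0",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	for _, ref := range images {
    		if err := lookupDaemon(ref); err != nil {
    			fmt.Printf("daemon lookup for %s: %v; falling back to cache\n", ref, err)
    			if err := lookupCache(ref); err != nil {
    				fmt.Printf("cache miss for %s: %v\n", ref, err)
    			}
    		}
    	}
    }
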
	I0626 20:46:34.636874   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.655059   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.683826   47309 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0626 20:46:34.683861   47309 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.683928   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.702952   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.703028   47309 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0626 20:46:34.703071   47309 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.703103   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.741790   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.741897   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0626 20:46:34.742006   47309 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.746779   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.749151   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0626 20:46:34.759216   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.760925   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.763727   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.802768   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0626 20:46:34.802855   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0626 20:46:34.802879   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802936   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802879   47309 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:34.875629   47309 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0626 20:46:34.875683   47309 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.875741   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976009   47309 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0626 20:46:34.976048   47309 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.976082   47309 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0626 20:46:34.976100   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976116   47309 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.976117   47309 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0626 20:46:34.976143   47309 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.976156   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976179   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:35.433285   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.379704   47605 main.go:141] libmachine: (embed-certs-299839) Waiting to get IP...
	I0626 20:46:34.380770   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.381274   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.381362   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.381264   48187 retry.go:31] will retry after 291.849421ms: waiting for machine to come up
	I0626 20:46:34.674760   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.675247   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.675276   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.675192   48187 retry.go:31] will retry after 276.057593ms: waiting for machine to come up
	I0626 20:46:34.952573   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.953045   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.953077   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.953003   48187 retry.go:31] will retry after 360.478931ms: waiting for machine to come up
	I0626 20:46:35.315537   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.316036   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.316057   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.315988   48187 retry.go:31] will retry after 582.62072ms: waiting for machine to come up
	I0626 20:46:35.899816   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.900171   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.900232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.900154   48187 retry.go:31] will retry after 502.843212ms: waiting for machine to come up
	I0626 20:46:36.404792   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:36.405188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:36.405222   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:36.405134   48187 retry.go:31] will retry after 594.811848ms: waiting for machine to come up
	I0626 20:46:37.001827   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:37.002238   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:37.002264   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:37.002182   48187 retry.go:31] will retry after 1.067889284s: waiting for machine to come up
	I0626 20:46:38.071685   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:38.072135   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:38.072158   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:38.072094   48187 retry.go:31] will retry after 1.189834776s: waiting for machine to come up
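
The "will retry after ..." lines above come from minikube's retry helper while process 47605 polls the libvirt DHCP leases for the embed-certs VM's MAC: each attempt sleeps a randomized, roughly growing interval (291ms, 276ms, 360ms, ... up to seconds). A sketch of that poll loop with jittered backoff; the exact backoff policy is an assumption, only the log's shape is reproduced, and getIP stands in for the lease lookup:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // getIP is a hypothetical stand-in for the libvirt DHCP-lease lookup.
    func getIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errNoIP
    	}
    	return "192.168.39.51", nil
    }

    func main() {
    	base := 250 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		if ip, err := getIP(attempt); err == nil {
    			fmt.Println("Found IP for machine:", ip)
    			return
    		}
    		// Jittered, slowly growing delay, matching the log's cadence.
    		d := base + time.Duration(rand.Int63n(int64(base)*int64(attempt)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
    		time.Sleep(d)
    	}
    }
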
	I0626 20:46:36.844137   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (2.041169028s)
	I0626 20:46:36.844171   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0626 20:46:36.844205   47309 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3: (2.041210189s)
	I0626 20:46:36.844232   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0626 20:46:36.844245   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844257   47309 ssh_runner.go:235] Completed: which crictl: (1.868146562s)
	I0626 20:46:36.844293   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844300   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:36.844234   47309 ssh_runner.go:235] Completed: which crictl: (1.968483663s)
	I0626 20:46:36.844349   47309 ssh_runner.go:235] Completed: which crictl: (1.868154335s)
	I0626 20:46:36.844364   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:36.844380   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:36.844405   47309 ssh_runner.go:235] Completed: which crictl: (1.868235538s)
	I0626 20:46:36.844428   47309 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.411115015s)
	I0626 20:46:36.844448   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:36.844455   47309 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0626 20:46:36.844488   47309 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:36.844513   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:39.895683   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (3.051359255s)
	I0626 20:46:39.895720   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0626 20:46:39.895808   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0: (3.051484848s)
	I0626 20:46:39.895824   47309 ssh_runner.go:235] Completed: which crictl: (3.051289954s)
	I0626 20:46:39.895855   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0626 20:46:39.895873   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1: (3.051494383s)
	I0626 20:46:39.895888   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:39.895908   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0626 20:46:39.895950   47309 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:39.895909   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3: (3.051516174s)
	I0626 20:46:39.895990   47309 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:39.896000   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3: (3.051535924s)
	I0626 20:46:39.896033   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0626 20:46:39.896034   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0626 20:46:39.896089   47309 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.896102   47309 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901778   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0626 20:46:39.901797   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901830   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.911439   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0626 20:46:39.911477   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0626 20:46:39.911517   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
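
The "copy: skipping ... (exists)" lines are the cache shortcut: before transferring a tarball, ssh_runner stats the remote file with %s %y (size and mtime; rendered as %!s(MISSING) %!y(MISSING) by the logger, as explained earlier) and skips the scp when it matches the local file. A sketch of that comparison; remoteStat is a hypothetical helper returning what stat -c "%s %y" would print:

    package main

    import "fmt"

    // remoteStat returns the remote file's size-and-mtime string, as
    // stat -c "%s %y" would print it, plus whether the file exists.
    func remoteStat(path string) (string, bool) {
    	return "123456 2023-06-26 20:40:00.000000000 +0000", true
    }

    func shouldCopy(localStat, path string) bool {
    	remote, ok := remoteStat(path)
    	if !ok {
    		return true // nothing on the guest yet
    	}
    	return remote != localStat // re-copy only on size/mtime mismatch
    }

    func main() {
    	local := "123456 2023-06-26 20:40:00.000000000 +0000"
    	if !shouldCopy(local, "/var/lib/minikube/images/etcd_3.5.7-0") {
    		fmt.Println("copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)")
    	}
    }
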
	I0626 20:46:39.943818   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0626 20:46:39.943947   47309 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:41.278134   47309 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.334156546s)
	I0626 20:46:41.278173   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0626 20:46:41.278135   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (1.376281957s)
	I0626 20:46:41.278187   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0626 20:46:41.278207   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:41.278256   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.263991   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:39.264402   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:39.264433   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:39.264371   48187 retry.go:31] will retry after 1.805262511s: waiting for machine to come up
	I0626 20:46:41.071232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:41.071707   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:41.071731   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:41.071662   48187 retry.go:31] will retry after 1.945519102s: waiting for machine to come up
	I0626 20:46:43.018581   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:43.019039   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:43.019075   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:43.018983   48187 retry.go:31] will retry after 2.83662877s: waiting for machine to come up
	I0626 20:46:43.745408   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.467115523s)
	I0626 20:46:43.745443   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0626 20:46:43.745479   47309 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:43.745551   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:45.011214   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.26563338s)
	I0626 20:46:45.011266   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0626 20:46:45.011296   47309 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:45.011349   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
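
Image loading is serialized: each cached archive under /var/lib/minikube/images is fed to sudo podman load -i one at a time, with crio.go tracking which image is in flight (the multi-second "Completed" durations above). A sketch of that loop; loadTarball stands in for the ssh_runner invocation and runs locally here:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadTarball feeds one cached image archive to podman.
    func loadTarball(path string) error {
    	return exec.Command("sudo", "podman", "load", "-i", path).Run()
    }

    func main() {
    	tarballs := []string{
    		"/var/lib/minikube/images/kube-controller-manager_v1.27.3",
    		"/var/lib/minikube/images/coredns_v1.10.1",
    		"/var/lib/minikube/images/etcd_3.5.7-0",
    	}
    	for _, t := range tarballs {
    		fmt.Println("Loading image:", t)
    		if err := loadTarball(t); err != nil {
    			fmt.Println("load failed:", err)
    		}
    	}
    }
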
	I0626 20:46:45.858520   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:45.858992   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:45.859026   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:45.858941   48187 retry.go:31] will retry after 2.332305212s: waiting for machine to come up
	I0626 20:46:48.193085   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:48.193594   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:48.193625   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:48.193543   48187 retry.go:31] will retry after 2.846333425s: waiting for machine to come up
	I0626 20:46:52.634333   47779 start.go:369] acquired machines lock for "default-k8s-diff-port-473235" in 2m17.310683576s
	I0626 20:46:52.634385   47779 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:52.634413   47779 fix.go:54] fixHost starting: 
	I0626 20:46:52.634850   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:52.634890   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:52.654153   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0626 20:46:52.654638   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:52.655306   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:46:52.655337   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:52.655747   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:52.655952   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:46:52.656158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:46:52.657823   47779 fix.go:102] recreateIfNeeded on default-k8s-diff-port-473235: state=Stopped err=<nil>
	I0626 20:46:52.657850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	W0626 20:46:52.657997   47779 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:52.659722   47779 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-473235" ...
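
fixHost reuses the existing machine rather than recreating it: the saved configuration is loaded, the driver reports state=Stopped, and the VM is simply restarted. A sketch of that decision; the machineState type and messages are illustrative, not minikube's own:

    package main

    import "fmt"

    type machineState int

    const (
    	stateRunning machineState = iota
    	stateStopped
    	stateError
    )

    // recreateIfNeeded mirrors fix.go's choice: restart a stopped VM,
    // leave a running one alone, and only recreate on an error state.
    func recreateIfNeeded(s machineState) string {
    	switch s {
    	case stateStopped:
    		return "Restarting existing kvm2 VM"
    	case stateRunning:
    		return "Using the running kvm2 VM"
    	default:
    		return "Recreating the VM"
    	}
    }

    func main() {
    	fmt.Println(recreateIfNeeded(stateStopped))
    }
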
	I0626 20:46:51.043526   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044005   47605 main.go:141] libmachine: (embed-certs-299839) Found IP for machine: 192.168.39.51
	I0626 20:46:51.044034   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has current primary IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044045   47605 main.go:141] libmachine: (embed-certs-299839) Reserving static IP address...
	I0626 20:46:51.044351   47605 main.go:141] libmachine: (embed-certs-299839) Reserved static IP address: 192.168.39.51
	I0626 20:46:51.044368   47605 main.go:141] libmachine: (embed-certs-299839) Waiting for SSH to be available...
	I0626 20:46:51.044405   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.044439   47605 main.go:141] libmachine: (embed-certs-299839) DBG | skip adding static IP to network mk-embed-certs-299839 - found existing host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"}
	I0626 20:46:51.044456   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Getting to WaitForSSH function...
	I0626 20:46:51.046694   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047088   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.047121   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047312   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH client type: external
	I0626 20:46:51.047348   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa (-rw-------)
	I0626 20:46:51.047392   47605 main.go:141] libmachine: (embed-certs-299839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:51.047414   47605 main.go:141] libmachine: (embed-certs-299839) DBG | About to run SSH command:
	I0626 20:46:51.047432   47605 main.go:141] libmachine: (embed-certs-299839) DBG | exit 0
	I0626 20:46:51.137058   47605 main.go:141] libmachine: (embed-certs-299839) DBG | SSH cmd err, output: <nil>: 
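
WaitForSSH above uses the external ssh binary with host-key checking disabled, running a bare exit 0 until the guest's sshd answers. A sketch of the same probe, with the key path and options taken from the log line (waitForSSH is a hypothetical wrapper):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // waitForSSH shells out to the system ssh and runs `exit 0`,
    // succeeding once the guest accepts the connection.
    func waitForSSH(user, host, key string) error {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", key,
    		fmt.Sprintf("%s@%s", user, host),
    		"exit 0",
    	}
    	return exec.Command("ssh", args...).Run()
    }

    func main() {
    	err := waitForSSH("docker", "192.168.39.51",
    		"/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa")
    	fmt.Println("ssh probe:", err)
    }
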
	I0626 20:46:51.137408   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetConfigRaw
	I0626 20:46:51.197444   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.199920   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200306   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.200339   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200574   47605 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/config.json ...
	I0626 20:46:51.267260   47605 machine.go:88] provisioning docker machine ...
	I0626 20:46:51.267304   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:51.267709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.267921   47605 buildroot.go:166] provisioning hostname "embed-certs-299839"
	I0626 20:46:51.267943   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.268086   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.270429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270762   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.270790   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270886   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.271060   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271200   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271308   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.271475   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.271933   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.271950   47605 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-299839 && echo "embed-certs-299839" | sudo tee /etc/hostname
	I0626 20:46:51.403584   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-299839
	
	I0626 20:46:51.403622   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.406552   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.406876   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.406904   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.407053   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.407335   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407530   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407716   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.407883   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.408280   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.408300   47605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-299839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-299839/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-299839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:51.534666   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
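
The two SSH commands above make up minikube's hostname provisioning: set the kernel hostname, then make sure /etc/hosts resolves it locally. A minimal Go sketch of how that shell snippet can be assembled (the package and helper name here are ours, not minikube's):

package sketch

import "fmt"

// hostsCmd builds the /etc/hosts fix-up logged above: if no line already
// ends with the machine name, either rewrite the 127.0.1.1 entry in place
// or append a fresh one.
func hostsCmd(name string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	fi
fi`, name)
}
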
	I0626 20:46:51.534702   47605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:51.534745   47605 buildroot.go:174] setting up certificates
	I0626 20:46:51.534753   47605 provision.go:83] configureAuth start
	I0626 20:46:51.534766   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.535047   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.537753   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538113   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.538141   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.540471   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.540890   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.540922   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.541015   47605 provision.go:138] copyHostCerts
	I0626 20:46:51.541089   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:51.541099   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:51.541155   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:51.541237   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:51.541246   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:51.541277   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:51.541333   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:51.541339   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:51.541357   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:51.541434   47605 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.embed-certs-299839 san=[192.168.39.51 192.168.39.51 localhost 127.0.0.1 minikube embed-certs-299839]
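
The san=[...] list above is the set of subject alternative names baked into the machine's server certificate, so the endpoint stays valid whether it is reached by IP, by hostname, or via localhost. A sketch of the same idea with Go's crypto/x509 (self-signed here for brevity; minikube signs with its own CA, and the SAN values are copied from the log):

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate carrying both IP and DNS SANs,
// mirroring the san=[...] list logged above. Returns DER-encoded bytes.
func newServerCert() ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-299839"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-299839"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.51"), net.ParseIP("127.0.0.1")},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
}
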
	I0626 20:46:51.873317   47605 provision.go:172] copyRemoteCerts
	I0626 20:46:51.873396   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:51.873427   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.876293   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876659   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.876696   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876889   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.877100   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.877262   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.877430   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:51.970189   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:46:51.993067   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:52.015607   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0626 20:46:52.037359   47605 provision.go:86] duration metric: configureAuth took 502.581033ms
	I0626 20:46:52.037401   47605 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:52.037623   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:52.037714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.040949   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.041486   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041642   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.041859   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042061   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042235   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.042398   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.042916   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.042936   47605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:52.366045   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:52.366072   47605 machine.go:91] provisioned docker machine in 1.098783864s
	I0626 20:46:52.366083   47605 start.go:300] post-start starting for "embed-certs-299839" (driver="kvm2")
	I0626 20:46:52.366112   47605 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:52.366134   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.366443   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:52.366472   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.369138   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369570   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.369630   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369781   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.369957   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.370131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.370278   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.467055   47605 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:52.471203   47605 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:52.471222   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:52.471288   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:52.471394   47605 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:52.471510   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:52.484668   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
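
The filesync step above mirrors everything under .minikube/files into the VM at the same path, which is how 144432.pem lands in /etc/ssl/certs. A sketch of that path mapping (function name is ours):

package sketch

import (
	"os"
	"path/filepath"
)

// assetTargets walks a local assets root and maps each file to the absolute
// path it should occupy inside the VM, e.g.
// files/etc/ssl/certs/144432.pem -> /etc/ssl/certs/144432.pem.
func assetTargets(root string) (map[string]string, error) {
	targets := map[string]string{}
	err := filepath.Walk(root, func(p string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		targets[p] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return targets, err
}
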
	I0626 20:46:52.510268   47605 start.go:303] post-start completed in 144.162745ms
	I0626 20:46:52.510292   47605 fix.go:56] fixHost completed within 19.415851972s
	I0626 20:46:52.510315   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.513188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513629   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.513662   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513848   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.514062   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514228   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514415   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.514569   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.514968   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.514983   47605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:46:52.634177   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812412.582368193
	
	I0626 20:46:52.634199   47605 fix.go:206] guest clock: 1687812412.582368193
	I0626 20:46:52.634209   47605 fix.go:219] Guest: 2023-06-26 20:46:52.582368193 +0000 UTC Remote: 2023-06-26 20:46:52.510296584 +0000 UTC m=+163.430129249 (delta=72.071609ms)
	I0626 20:46:52.634237   47605 fix.go:190] guest clock delta is within tolerance: 72.071609ms
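
fix.go reads the guest clock over SSH (the date +%s.%N call above), compares it with the host's, and only resyncs when the delta exceeds a tolerance. A sketch of that check; the 1s bound is an assumption for illustration, not minikube's exact threshold:

package sketch

import "time"

// clockDeltaOK reports whether guest and host clocks agree closely enough
// to skip a resync, as in the "delta is within tolerance" line above.
func clockDeltaOK(guest, host time.Time) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= time.Second
}
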
	I0626 20:46:52.634242   47605 start.go:83] releasing machines lock for "embed-certs-299839", held for 19.539848437s
	I0626 20:46:52.634277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.634623   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:52.637732   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638182   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.638220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638476   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639040   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639223   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639307   47605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:52.639346   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.639490   47605 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:52.639517   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.642288   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642923   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642968   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643016   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643351   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643492   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643528   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643564   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643763   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.643778   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643973   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643991   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.644109   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.644240   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.761230   47605 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:52.766865   47605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:52.919883   47605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:52.927218   47605 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:52.927290   47605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:52.948916   47605 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:52.948983   47605 start.go:466] detecting cgroup driver to use...
	I0626 20:46:52.949043   47605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:52.968673   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:52.982360   47605 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:52.982416   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:52.996984   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:53.015021   47605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:53.116692   47605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:53.251017   47605 docker.go:212] disabling docker service ...
	I0626 20:46:53.251096   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:53.268097   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:53.282223   47605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:53.412477   47605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:53.528110   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:46:53.541392   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:53.558736   47605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:53.558809   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.568482   47605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:53.568553   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.578178   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.587728   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
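
The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager, and re-add conmon_cgroup after it. The same key replacement can be expressed in Go; a sketch under the assumption that every option lives on its own `key = value` line:

package sketch

import "regexp"

// setCrioOption replaces any existing `key = ...` line in the config with
// a quoted value, mirroring the sed substitutions logged above.
func setCrioOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}
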
	I0626 20:46:53.597231   47605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:53.606954   47605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:53.615250   47605 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:53.615308   47605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:53.628161   47605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
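
Note the fallback above: the netfilter sysctl fails with status 255 because br_netfilter is not loaded yet, so minikube loads the module and then enables IPv4 forwarding. A sketch of that sequence over an abstract command runner (the runner signature is ours):

package sketch

// ensureBridgeNetfilter falls back to loading the br_netfilter module when
// the sysctl probe fails, then turns on IPv4 forwarding, matching the three
// commands logged above.
func ensureBridgeNetfilter(run func(cmd string) error) error {
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("sudo modprobe br_netfilter"); err != nil {
			return err
		}
	}
	return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}
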
	I0626 20:46:53.636477   47605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:53.755919   47605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:46:53.928744   47605 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:53.928823   47605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:53.934088   47605 start.go:534] Will wait 60s for crictl version
	I0626 20:46:53.934152   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:46:53.939345   47605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:53.971679   47605 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:53.971781   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.013494   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.062724   47605 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:54.064536   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:54.067854   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:54.068254   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068535   47605 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:54.072971   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:54.085981   47605 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:54.086048   47605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:52.661170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Start
	I0626 20:46:52.661331   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring networks are active...
	I0626 20:46:52.662042   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network default is active
	I0626 20:46:52.662444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network mk-default-k8s-diff-port-473235 is active
	I0626 20:46:52.663218   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Getting domain xml...
	I0626 20:46:52.663876   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Creating domain...
	I0626 20:46:53.987148   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting to get IP...
	I0626 20:46:53.988282   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988739   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988832   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:53.988735   48355 retry.go:31] will retry after 271.192351ms: waiting for machine to come up
	I0626 20:46:54.261343   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261825   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261857   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.261773   48355 retry.go:31] will retry after 362.262293ms: waiting for machine to come up
	I0626 20:46:54.625453   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625951   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625978   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.625859   48355 retry.go:31] will retry after 311.337455ms: waiting for machine to come up
	I0626 20:46:54.938519   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939023   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939053   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.938972   48355 retry.go:31] will retry after 446.154442ms: waiting for machine to come up
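
The retry.go lines above poll for the domain's DHCP lease with a growing, jittered delay (271ms, 362ms, 311ms, 446ms, ...). A sketch of that pattern; the deadline and jitter formula are illustrative, not the exact backoff retry.go uses:

package sketch

import (
	"errors"
	"math/rand"
	"time"
)

// waitForIP polls getIP until it succeeds or a deadline passes, sleeping a
// randomized, slowly growing interval between attempts.
func waitForIP(getIP func() (string, bool)) (string, error) {
	deadline := time.Now().Add(4 * time.Minute)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := getIP(); ok {
			return ip, nil
		}
		time.Sleep(delay)
		delay += time.Duration(rand.Int63n(int64(delay))) // rough jitter
	}
	return "", errors.New("timed out waiting for machine to come up")
}
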
	I0626 20:46:52.039929   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.0285527s)
	I0626 20:46:52.039951   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0626 20:46:52.039974   47309 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.040015   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.786422   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0626 20:46:52.786468   47309 cache_images.go:123] Successfully loaded all cached images
	I0626 20:46:52.786474   47309 cache_images.go:92] LoadImages completed in 18.320847233s
	I0626 20:46:52.786562   47309 ssh_runner.go:195] Run: crio config
	I0626 20:46:52.857805   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:46:52.857833   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:52.857849   47309 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:52.857871   47309 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.38 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-934450 NodeName:no-preload-934450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:52.858035   47309 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-934450"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:46:52.858115   47309 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-934450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
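
The systemd drop-in above overrides the kubelet's ExecStart with cluster-specific flags. A sketch of rendering such a unit with text/template (flag list abbreviated; the template and helper names are ours):

package sketch

import (
	"strings"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

// renderUnit fills in the version, node name, and node IP seen in the log.
func renderUnit(version, node, ip string) (string, error) {
	t, err := template.New("kubelet").Parse(kubeletUnit)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	err = t.Execute(&b, struct{ Version, Node, IP string }{version, node, ip})
	return b.String(), err
}

Calling renderUnit("v1.27.3", "no-preload-934450", "192.168.50.38") would reproduce the flags logged above.
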
	I0626 20:46:52.858172   47309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:52.867179   47309 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:52.867253   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:52.875412   47309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0626 20:46:52.891376   47309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:52.906859   47309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0626 20:46:52.924927   47309 ssh_runner.go:195] Run: grep 192.168.50.38	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:52.929059   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:52.942789   47309 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450 for IP: 192.168.50.38
	I0626 20:46:52.942825   47309 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:52.943011   47309 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:52.943059   47309 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:52.943138   47309 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.key
	I0626 20:46:52.943195   47309 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key.01da567d
	I0626 20:46:52.943236   47309 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key
	I0626 20:46:52.943341   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:52.943376   47309 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:52.943396   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:52.943435   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:52.943472   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:52.943509   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:52.943551   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:52.944147   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:52.971630   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:52.997892   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:53.024951   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 20:46:53.048462   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:53.075077   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:53.100318   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:53.129545   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:53.162187   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:53.191304   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:53.216166   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:53.240182   47309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:53.256447   47309 ssh_runner.go:195] Run: openssl version
	I0626 20:46:53.262053   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:53.272163   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277028   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277084   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.282611   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:53.296039   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:53.306923   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312778   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312825   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.320244   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:53.334066   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:53.347662   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353665   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353725   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.361150   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
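
Each ln -fs above targets a hash-named symlink (3ec20f2e.0, b5213941.0, 51391683.0): OpenSSL looks CA certificates up by subject-name hash. minikube does this in shell; a Go equivalent, as a sketch:

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertHash asks openssl for the subject hash of a PEM file and points
// /etc/ssl/certs/<hash>.0 at it, like the test -L || ln -fs commands above.
func linkCertHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link, mirroring ln -fs
	return os.Symlink(pemPath, link)
}
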
	I0626 20:46:53.374846   47309 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:53.380462   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:53.387949   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:53.393690   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:53.399208   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:53.405073   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:53.411265   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
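
The run of `openssl x509 -checkend 86400` calls above verifies that every control-plane certificate stays valid for at least another day before the restart reuses it. The same check in pure Go with crypto/x509, as a sketch:

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"time"
)

// validFor24h reports whether a PEM-encoded certificate's NotAfter is at
// least 86400 seconds away, matching the -checkend probes above.
func validFor24h(pemBytes []byte) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}
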
	I0626 20:46:53.417798   47309 kubeadm.go:404] StartCluster: {Name:no-preload-934450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:53.417916   47309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:53.417950   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:53.451231   47309 cri.go:89] found id: ""
	I0626 20:46:53.451307   47309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:53.460716   47309 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:53.460737   47309 kubeadm.go:636] restartCluster start
	I0626 20:46:53.460790   47309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:53.470518   47309 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.471961   47309 kubeconfig.go:92] found "no-preload-934450" server: "https://192.168.50.38:8443"
	I0626 20:46:53.475433   47309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:53.484054   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.484108   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:53.497348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.998070   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.998129   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.010119   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.498134   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.498223   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.512223   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.997432   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.997520   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.015317   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.497435   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.497516   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.512591   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.998180   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.998251   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.013135   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:56.497483   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.497573   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.512714   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.116295   47605 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:54.116360   47605 ssh_runner.go:195] Run: which lz4
	I0626 20:46:54.120344   47605 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:46:54.124462   47605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:46:54.124490   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:46:55.959041   47605 crio.go:444] Took 1.838722 seconds to copy over tarball
	I0626 20:46:55.959115   47605 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:46:59.019532   47605 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060382374s)
	I0626 20:46:59.019555   47605 crio.go:451] Took 3.060486 seconds to extract the tarball
	I0626 20:46:59.019562   47605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:46:59.058687   47605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:59.102812   47605 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:46:59.102833   47605 cache_images.go:84] Images are preloaded, skipping loading
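
The timings above ("Took 1.84 seconds to copy over tarball", "Completed: ... (3.06s)") come from wrapping each remote command in a stopwatch. A minimal sketch of that pattern (the runner type is ours):

package sketch

import (
	"log"
	"time"
)

// timedRun executes a command through the given runner and logs its wall
// time in the "Completed: <cmd>: (<duration>)" style seen above.
func timedRun(run func(string) error, cmd string) error {
	start := time.Now()
	err := run(cmd)
	log.Printf("Completed: %s: (%s)", cmd, time.Since(start))
	return err
}
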
	I0626 20:46:59.102896   47605 ssh_runner.go:195] Run: crio config
	I0626 20:46:55.386479   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.386986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.387014   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:55.386901   48355 retry.go:31] will retry after 710.798834ms: waiting for machine to come up
	I0626 20:46:56.099580   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100079   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100112   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:56.100023   48355 retry.go:31] will retry after 921.187154ms: waiting for machine to come up
	I0626 20:46:57.022481   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022914   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.022859   48355 retry.go:31] will retry after 914.232442ms: waiting for machine to come up
	I0626 20:46:57.938375   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938823   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938845   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.938807   48355 retry.go:31] will retry after 1.411011331s: waiting for machine to come up
	I0626 20:46:59.351697   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352133   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352169   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:59.352076   48355 retry.go:31] will retry after 1.830031795s: waiting for machine to come up
	I0626 20:46:56.997450   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.997518   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.009310   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.497847   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.497929   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.513061   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.997474   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.997553   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.012610   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.498200   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.498274   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.513410   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.997938   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.998022   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.013357   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.497503   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.497581   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.514354   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.997445   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.997531   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.008894   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.497471   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.497555   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.508635   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.998326   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.998429   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.009836   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.498479   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.498593   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.510348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.159206   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:46:59.159236   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:59.159252   47605 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:59.159286   47605 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-299839 NodeName:embed-certs-299839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:59.159423   47605 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-299839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
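The dump above is the multi-document kubeadm config that minikube renders: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---. A minimal sketch of reading such a stream and listing each document's kind using gopkg.in/yaml.v3; the local file name is hypothetical (the log writes the rendered config to /var/tmp/minikube/kubeadm.yaml.new on the guest):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Hypothetical local copy of the rendered config.
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // A kubeadm config is a stream of YAML documents separated
        // by "---"; decode them one at a time.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }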
	
	I0626 20:46:59.159484   47605 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-299839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 20:46:59.159540   47605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:59.168802   47605 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:59.168882   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:59.177994   47605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0626 20:46:59.196041   47605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:59.214092   47605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0626 20:46:59.235187   47605 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:59.239440   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:59.251723   47605 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839 for IP: 192.168.39.51
	I0626 20:46:59.251772   47605 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:59.251943   47605 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:59.252017   47605 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:59.252134   47605 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/client.key
	I0626 20:46:59.252381   47605 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key.be9c3c95
	I0626 20:46:59.252482   47605 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key
	I0626 20:46:59.252626   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:59.252667   47605 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:59.252682   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:59.252718   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:59.252748   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:59.252805   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:59.252868   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:59.253555   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:59.280222   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:59.306244   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:59.331876   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:46:59.358710   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:59.385239   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:59.408963   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:59.433684   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:59.457235   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:59.480565   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:59.507918   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:59.532762   47605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:59.551283   47605 ssh_runner.go:195] Run: openssl version
	I0626 20:46:59.557079   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:59.568335   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573129   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573187   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.579116   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:46:59.589952   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:59.600935   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605668   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605735   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.611234   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:59.622615   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:59.633737   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638884   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638962   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.644559   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:59.655653   47605 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:59.660632   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:59.666672   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:59.672628   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:59.679194   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:59.685197   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:59.691190   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
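The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least the next 24 hours, which is what lets minikube skip regenerating them. The equivalent check with Go's crypto/x509, as a sketch against one of the files from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // One of the certificates checked in the log.
        data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: does the
        // certificate expire within the next 24 hours?
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h; regeneration needed")
        } else {
            fmt.Println("certificate valid for at least another 24h")
        }
    }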
	I0626 20:46:59.697063   47605 kubeadm.go:404] StartCluster: {Name:embed-certs-299839 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:59.697146   47605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:59.697191   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:59.731197   47605 cri.go:89] found id: ""
	I0626 20:46:59.731256   47605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:59.741949   47605 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:59.741968   47605 kubeadm.go:636] restartCluster start
	I0626 20:46:59.742023   47605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:59.751837   47605 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.753347   47605 kubeconfig.go:92] found "embed-certs-299839" server: "https://192.168.39.51:8443"
	I0626 20:46:59.756955   47605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:59.766951   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.767023   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.779343   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.280064   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.280149   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.293730   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.780264   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.780347   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.793352   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.279827   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.279911   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.292843   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.779409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.779513   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.793293   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.279814   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.279902   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.296345   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.779892   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.779980   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.796346   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.280342   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.280417   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.292883   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.780156   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.780232   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.792667   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.184295   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184668   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184694   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:01.184605   48355 retry.go:31] will retry after 2.248796967s: waiting for machine to come up
	I0626 20:47:03.435559   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436054   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436086   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:03.435982   48355 retry.go:31] will retry after 2.012102985s: waiting for machine to come up
	I0626 20:47:01.998275   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.998353   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.014217   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.497731   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.497824   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.509505   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.998119   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.998202   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.009348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.485111   47309 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:03.485154   47309 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:03.485167   47309 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:03.485216   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:03.516791   47309 cri.go:89] found id: ""
	I0626 20:47:03.516868   47309 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:03.531523   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:03.540694   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:03.540761   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549498   47309 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549525   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:03.687202   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.779117   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.091878038s)
	I0626 20:47:04.779156   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.983470   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:05.059963   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
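Instead of a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the saved /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence with os/exec, assuming kubeadm is already on PATH (minikube itself prefixes PATH with the versioned binaries directory, as the commands above show):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // The same phase order the log replays on restart.
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", cfg},
            {"init", "phase", "kubeconfig", "all", "--config", cfg},
            {"init", "phase", "kubelet-start", "--config", cfg},
            {"init", "phase", "control-plane", "all", "--config", cfg},
            {"init", "phase", "etcd", "local", "--config", cfg},
        }
        for _, args := range phases {
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            fmt.Println("running kubeadm", args)
            if err := cmd.Run(); err != nil {
                panic(fmt.Sprintf("phase %v failed: %v", args, err))
            }
        }
    }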
	I0626 20:47:05.136199   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:05.136282   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:05.663265   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:06.163057   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:04.280330   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.280447   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.292565   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:04.780127   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.780225   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.797554   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.279900   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.279986   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.297853   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.779501   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.779594   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.794314   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.279916   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.280001   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.296829   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.779473   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.779566   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.793302   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.279802   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.279888   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.292407   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.779813   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.779914   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.793591   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.279846   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.279935   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.292196   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.779753   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.779859   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.792362   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.450681   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451186   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451216   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:05.451117   48355 retry.go:31] will retry after 3.442192384s: waiting for machine to come up
	I0626 20:47:08.895024   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:08.895520   48355 retry.go:31] will retry after 4.272351839s: waiting for machine to come up
	I0626 20:47:06.662926   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.163275   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.662871   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.689321   47309 api_server.go:72] duration metric: took 2.55312002s to wait for apiserver process to appear ...
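The wait above polls sudo pgrep -xnf kube-apiserver.*minikube.* about twice a second; pgrep exits with status 1 until a matching process exists, which is exactly what the earlier failed attempts logged. A local (non-SSH) sketch of the same wait:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until a process matching pattern
    // appears or the timeout expires. pgrep exits non-zero when
    // nothing matches, as in the log's failed attempts.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %q", pattern)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }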
	I0626 20:47:07.689348   47309 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:07.689366   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:10.879412   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:10.879439   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:11.379823   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.386705   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.386736   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:11.880574   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.892733   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.892768   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:12.380392   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:12.389894   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:47:12.400274   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:12.400307   47309 api_server.go:131] duration metric: took 4.710951407s to wait for apiserver health ...
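The healthz progression above is typical of a cold apiserver start: 403 first (the unauthenticated probe is rejected until the RBAC bootstrap roles exist), then 500 while the remaining poststart hooks finish, then 200. A sketch of polling /healthz until it reports ok, skipping TLS verification for simplicity (a production client would trust the cluster CA instead); the endpoint is the one from the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Probe client: short timeout, no TLS verification, since the
        // apiserver serves a cluster-internal certificate.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.50.38:8443/healthz"
        for i := 0; i < 60; i++ {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("healthz never reported ok")
    }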
	I0626 20:47:12.400320   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:47:12.400332   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:12.402355   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:47:09.280409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:09.280512   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:09.293009   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:09.767593   47605 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:09.767636   47605 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:09.767648   47605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:09.767705   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:09.800380   47605 cri.go:89] found id: ""
	I0626 20:47:09.800465   47605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:09.819239   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:09.830482   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:09.830547   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840424   47605 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840451   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:09.979898   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.746785   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.960847   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:11.041569   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:11.122238   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:11.122322   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:11.640034   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.140386   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.640370   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.139901   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.639546   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.663848   47605 api_server.go:72] duration metric: took 2.54160148s to wait for apiserver process to appear ...
	I0626 20:47:13.663874   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:13.663905   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:14.587552   46683 start.go:369] acquired machines lock for "old-k8s-version-490377" in 55.268521785s
	I0626 20:47:14.587610   46683 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:47:14.587622   46683 fix.go:54] fixHost starting: 
	I0626 20:47:14.588035   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:47:14.588074   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:47:14.607186   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0626 20:47:14.607765   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:47:14.608361   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:47:14.608384   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:47:14.608697   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:47:14.608908   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:14.609056   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:47:14.610765   46683 fix.go:102] recreateIfNeeded on old-k8s-version-490377: state=Stopped err=<nil>
	I0626 20:47:14.610791   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	W0626 20:47:14.611905   46683 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:47:14.613885   46683 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-490377" ...
	I0626 20:47:13.169996   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.170568   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Found IP for machine: 192.168.61.238
	I0626 20:47:13.170601   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserving static IP address...
	I0626 20:47:13.170622   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has current primary IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.171048   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.171080   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserved static IP address: 192.168.61.238
	I0626 20:47:13.171107   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | skip adding static IP to network mk-default-k8s-diff-port-473235 - found existing host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"}
	I0626 20:47:13.171128   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Getting to WaitForSSH function...
	I0626 20:47:13.171141   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for SSH to be available...
	I0626 20:47:13.173755   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174235   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.174265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174442   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH client type: external
	I0626 20:47:13.174485   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa (-rw-------)
	I0626 20:47:13.174518   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:13.174538   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | About to run SSH command:
	I0626 20:47:13.174553   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | exit 0
	I0626 20:47:13.265799   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | SSH cmd err, output: <nil>: 
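WaitForSSH above shells out to the system ssh binary with the options shown and runs exit 0 until the guest answers. A sketch of that loop with os/exec; the address and key path are taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", "/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa",
            "docker@192.168.61.238",
            "exit", "0",
        }
        // Keep running `ssh ... exit 0` until the guest's sshd is up.
        for i := 0; i < 30; i++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }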
	I0626 20:47:13.266189   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetConfigRaw
	I0626 20:47:13.266850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.269749   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270212   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.270253   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270498   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:47:13.270732   47779 machine.go:88] provisioning docker machine ...
	I0626 20:47:13.270758   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:13.270959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271112   47779 buildroot.go:166] provisioning hostname "default-k8s-diff-port-473235"
	I0626 20:47:13.271134   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.273679   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274087   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.274135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274273   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.274446   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274618   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274747   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.274940   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.275353   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.275369   47779 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-473235 && echo "default-k8s-diff-port-473235" | sudo tee /etc/hostname
	I0626 20:47:13.416565   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-473235
	
	I0626 20:47:13.416595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.420132   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420596   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.420670   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.421172   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421392   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.421821   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.422425   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.422457   47779 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-473235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-473235/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-473235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:13.566095   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
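The shell fragment above is idempotent: it leaves /etc/hosts alone when some line already ends in the hostname, rewrites an existing 127.0.1.1 entry if there is one, and appends one otherwise. The same decision logic in Go, as a sketch over the file contents (space-separated fields only; the shell version also matches tabs):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostname mirrors the shell logic from the log: if no line
    // already names the host, rewrite the 127.0.1.1 entry or append one.
    func ensureHostname(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
                return hosts // already present, nothing to do
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n", "default-k8s-diff-port-473235"))
    }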
	I0626 20:47:13.566131   47779 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:13.566175   47779 buildroot.go:174] setting up certificates
	I0626 20:47:13.566192   47779 provision.go:83] configureAuth start
	I0626 20:47:13.566206   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.566509   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.569795   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570251   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.570283   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570476   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.573020   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573439   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.573475   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573704   47779 provision.go:138] copyHostCerts
	I0626 20:47:13.573782   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:13.573795   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:13.573859   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:13.573976   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:13.573987   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:13.574016   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:13.574094   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:13.574108   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:13.574134   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:13.574199   47779 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-473235 san=[192.168.61.238 192.168.61.238 localhost 127.0.0.1 minikube default-k8s-diff-port-473235]
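provision.go generates this server certificate in Go, but as a point of comparison only, the same SAN set can be reproduced with openssl. This is a sketch, not how minikube does it, and the file names are shortened from the paths in the log:

    # Sketch: issue a server cert against the minikube CA with the SANs above.
    # ca.pem / ca-key.pem stand in for the .minikube/certs paths in the log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.default-k8s-diff-port-473235"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.61.238,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-diff-port-473235')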
	I0626 20:47:13.795155   47779 provision.go:172] copyRemoteCerts
	I0626 20:47:13.795207   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:13.795230   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.798039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798457   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.798512   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798706   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.798918   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.799130   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.799274   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:13.892185   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:13.921840   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0626 20:47:13.951311   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:13.980185   47779 provision.go:86] duration metric: configureAuth took 413.976937ms
	I0626 20:47:13.980216   47779 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:13.980460   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:47:13.980551   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.983814   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984217   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.984265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984604   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.984826   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985010   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985144   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.985344   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.985947   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.985979   47779 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:14.317679   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:14.317702   47779 machine.go:91] provisioned docker machine in 1.046953094s
	I0626 20:47:14.317713   47779 start.go:300] post-start starting for "default-k8s-diff-port-473235" (driver="kvm2")
	I0626 20:47:14.317723   47779 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:14.317744   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.318064   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:14.318101   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.321001   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321358   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.321408   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321598   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.321806   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.321986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.322139   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.414722   47779 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:14.419797   47779 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:14.419822   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:14.419895   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:14.419990   47779 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:14.420118   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:14.430766   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:14.458086   47779 start.go:303] post-start completed in 140.355388ms
	I0626 20:47:14.458107   47779 fix.go:56] fixHost completed within 21.823695632s
	I0626 20:47:14.458125   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.460953   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461277   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.461308   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461472   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.461651   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.461841   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.462025   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.462175   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:14.462805   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:14.462823   47779 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:47:14.587374   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812434.534091475
	
	I0626 20:47:14.587395   47779 fix.go:206] guest clock: 1687812434.534091475
	I0626 20:47:14.587403   47779 fix.go:219] Guest: 2023-06-26 20:47:14.534091475 +0000 UTC Remote: 2023-06-26 20:47:14.458110543 +0000 UTC m=+159.266861615 (delta=75.980932ms)
	I0626 20:47:14.587446   47779 fix.go:190] guest clock delta is within tolerance: 75.980932ms
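Spelled out, the delta is just the difference between the two timestamps above: 20:47:14.534091475 (guest) minus 20:47:14.458110543 (host) = 0.075980932 s, i.e. the 75.980932ms reported, which falls inside the drift tolerance fix.go allows, so no guest clock reset is attempted.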
	I0626 20:47:14.587456   47779 start.go:83] releasing machines lock for "default-k8s-diff-port-473235", held for 21.953095935s
	I0626 20:47:14.587492   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.587776   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:14.590654   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591111   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.591143   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591332   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.591869   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592074   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592151   47779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:14.592205   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.592451   47779 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:14.592489   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.595039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595271   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595585   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595615   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595659   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595698   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595901   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596076   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596118   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596311   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596344   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596466   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.596622   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.683637   47779 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:14.713738   47779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:14.869873   47779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:14.877719   47779 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:14.877815   47779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:14.893656   47779 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:47:14.893682   47779 start.go:466] detecting cgroup driver to use...
	I0626 20:47:14.893738   47779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:14.908419   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:14.921730   47779 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:14.921812   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:14.940659   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:14.955010   47779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:15.062849   47779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:15.193682   47779 docker.go:212] disabling docker service ...
	I0626 20:47:15.193810   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:15.210855   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:15.223362   47779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:15.348648   47779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:15.471398   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:47:15.496137   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:15.523967   47779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:47:15.524041   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.537188   47779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:15.537258   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.550404   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.563577   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
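After the sed edits above, the drop-in would contain roughly the following keys. This is a sketch of the edited lines only; the surrounding TOML section headers depend on the 02-crio.conf shipped in the guest image:

    # /etc/crio/crio.conf.d/02-crio.conf (edited keys only, sketch)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"   # appended right after cgroup_manager by the last sed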
	I0626 20:47:15.574958   47779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:15.588685   47779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:15.600611   47779 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:15.600680   47779 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:15.615658   47779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
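minikube applies these kernel settings ad hoc over SSH, which is fine for a throwaway VM. On a long-lived node the same state is usually persisted; a sketch, assuming a standard modules-load.d/sysctl.d layout (file names here are illustrative):

    # Load br_netfilter now and on every boot, then persist the sysctls.
    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system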
	I0626 20:47:15.628004   47779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:15.763410   47779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:15.982719   47779 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:15.982799   47779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:47:15.990799   47779 start.go:534] Will wait 60s for crictl version
	I0626 20:47:15.990864   47779 ssh_runner.go:195] Run: which crictl
	I0626 20:47:15.997709   47779 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:16.041802   47779 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:16.041893   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.094989   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.151324   47779 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:47:12.403841   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:12.420028   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:12.459593   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:12.486209   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:12.486256   47309 system_pods.go:61] "coredns-5d78c9869d-dwkng" [8919aa0b-b8b6-4672-aa75-ea5ea1d27ef6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:12.486270   47309 system_pods.go:61] "etcd-no-preload-934450" [67a1367b-dc99-4613-8a75-796a64f13f0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:12.486281   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [7452cf79-3e8f-4dce-922a-a52115c7059f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:12.486291   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [a3393645-4d3d-4fab-a32f-c15ff3bfcdca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:12.486300   47309 system_pods.go:61] "kube-proxy-phrv2" [d08fdd52-cc2a-43cb-84c4-170ad241527e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:12.486310   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [cc1c89f8-925a-4847-b693-08fbc4905119] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:12.486319   47309 system_pods.go:61] "metrics-server-74d5c6b9c-7szm5" [d94c68f7-4521-4366-b5db-38f420a78dd2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:12.486331   47309 system_pods.go:61] "storage-provisioner" [7aa74f96-c306-4d70-a211-715b4877b15b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:12.486341   47309 system_pods.go:74] duration metric: took 26.722879ms to wait for pod list to return data ...
	I0626 20:47:12.486359   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:12.490745   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:12.490784   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:12.490809   47309 node_conditions.go:105] duration metric: took 4.437855ms to run NodePressure ...
	I0626 20:47:12.490830   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:12.794912   47309 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800827   47309 kubeadm.go:787] kubelet initialised
	I0626 20:47:12.800855   47309 kubeadm.go:788] duration metric: took 5.915334ms waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800865   47309 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:12.807162   47309 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:14.822450   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
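pod_ready.go polls the pod's Ready condition through the API. A rough manual equivalent with kubectl, assuming a kubeconfig context named after the profile (a sketch; the pod name is taken from the log):

    kubectl --context no-preload-934450 -n kube-system \
      wait --for=condition=Ready pod/coredns-5d78c9869d-dwkng --timeout=4m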
	I0626 20:47:14.614985   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Start
	I0626 20:47:14.615159   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring networks are active...
	I0626 20:47:14.615866   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network default is active
	I0626 20:47:14.616331   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network mk-old-k8s-version-490377 is active
	I0626 20:47:14.616785   46683 main.go:141] libmachine: (old-k8s-version-490377) Getting domain xml...
	I0626 20:47:14.617507   46683 main.go:141] libmachine: (old-k8s-version-490377) Creating domain...
	I0626 20:47:16.055502   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting to get IP...
	I0626 20:47:16.056448   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.056913   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.057009   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.056935   48478 retry.go:31] will retry after 281.770624ms: waiting for machine to come up
	I0626 20:47:16.340685   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.341472   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.341496   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.341268   48478 retry.go:31] will retry after 249.185886ms: waiting for machine to come up
	I0626 20:47:16.591867   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.592547   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.592718   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.592671   48478 retry.go:31] will retry after 327.814159ms: waiting for machine to come up
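The retry loop above is waiting for libvirt's DHCP server to hand the domain an address for that MAC. The same lease table can be inspected by hand, assuming virsh access to the same libvirt instance (sketch):

    virsh net-dhcp-leases mk-old-k8s-version-490377   # network name taken from the log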
	I0626 20:47:17.910025   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:17.910061   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:18.411167   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.425310   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.425345   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:18.910567   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.920897   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.920933   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:19.410736   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:19.418228   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
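The [+]/[-] per-check breakdown seen above is what /healthz returns on failure; appending ?verbose makes the endpoint return it on success as well. The earlier 403 for system:anonymous is expected until the rbac/bootstrap-roles post-start hook installs the default roles that permit anonymous health probes. Reproducing the probe by hand (sketch; -k because no client certificate is presented):

    curl -k "https://192.168.39.51:8443/healthz?verbose"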
	I0626 20:47:19.428516   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:19.428551   47605 api_server.go:131] duration metric: took 5.764669652s to wait for apiserver health ...
	I0626 20:47:19.428561   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:47:19.428573   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:19.430711   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:47:16.152563   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:16.156250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156617   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:16.156644   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156894   47779 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:16.162480   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
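The one-liner above uses a temp file because a plain `sudo cmd > /etc/hosts` would perform the redirection as the unprivileged SSH user. Broken out, it is equivalent to this sketch:

    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$   # keep every other entry
    printf '192.168.61.1\thost.minikube.internal\n' >> /tmp/h.$$  # re-add the mapping
    sudo cp /tmp/h.$$ /etc/hosts                                  # install with root privileges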
	I0626 20:47:16.180283   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:47:16.180336   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:16.227399   47779 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:47:16.227474   47779 ssh_runner.go:195] Run: which lz4
	I0626 20:47:16.233720   47779 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:16.240423   47779 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:16.240463   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:47:18.263416   47779 crio.go:444] Took 2.029753 seconds to copy over tarball
	I0626 20:47:18.263515   47779 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:47:16.837607   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:19.361799   47309 pod_ready.go:92] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.361869   47309 pod_ready.go:81] duration metric: took 6.554677083s waiting for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.361886   47309 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370122   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.370145   47309 pod_ready.go:81] duration metric: took 8.249243ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370157   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391052   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:21.391082   47309 pod_ready.go:81] duration metric: took 2.020917194s waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391096   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:16.922381   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.922923   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.922952   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.922873   48478 retry.go:31] will retry after 486.21568ms: waiting for machine to come up
	I0626 20:47:17.410676   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:17.411282   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:17.411305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:17.411227   48478 retry.go:31] will retry after 606.277374ms: waiting for machine to come up
	I0626 20:47:18.020296   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.021367   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.021400   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.021287   48478 retry.go:31] will retry after 576.843487ms: waiting for machine to come up
	I0626 20:47:18.599674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.600326   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.600352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.600221   48478 retry.go:31] will retry after 857.329718ms: waiting for machine to come up
	I0626 20:47:19.459545   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:19.460101   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:19.460125   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:19.460050   48478 retry.go:31] will retry after 1.017747035s: waiting for machine to come up
	I0626 20:47:20.479538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:20.480140   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:20.480178   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:20.480043   48478 retry.go:31] will retry after 1.379789146s: waiting for machine to come up
	I0626 20:47:19.432325   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:19.461944   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:19.498519   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:19.512703   47605 system_pods.go:59] 9 kube-system pods found
	I0626 20:47:19.512831   47605 system_pods.go:61] "coredns-5d78c9869d-dz48f" [87a67e95-a071-4865-902b-0e401e852456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512860   47605 system_pods.go:61] "coredns-5d78c9869d-lbfsr" [adee7e6b-88b2-412e-bb2d-fc0939bca149] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512905   47605 system_pods.go:61] "etcd-embed-certs-299839" [8aefd012-6a54-4e75-afc9-cc8385212eb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:19.512937   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [e178b5e8-445c-444f-965e-051233c2fa44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:19.512971   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [e965e4af-a673-4b93-bb63-e7bfc0f9514d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:19.512995   47605 system_pods.go:61] "kube-proxy-q5khr" [6c11d667-3490-4417-8e0c-373fe25d06b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:19.513014   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [0385958c-3f22-4eb8-bdac-cbaeb52fe9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:19.513050   47605 system_pods.go:61] "metrics-server-74d5c6b9c-gb6b2" [b5a15d68-23ee-4274-a147-db6f2eef97e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:19.513074   47605 system_pods.go:61] "storage-provisioner" [42bd8483-f594-4bf9-8c32-9688d1d99523] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:19.513093   47605 system_pods.go:74] duration metric: took 14.550735ms to wait for pod list to return data ...
	I0626 20:47:19.513125   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:19.519356   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:19.519455   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:19.519513   47605 node_conditions.go:105] duration metric: took 6.36764ms to run NodePressure ...
	I0626 20:47:19.519573   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:19.935407   47605 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943592   47605 kubeadm.go:787] kubelet initialised
	I0626 20:47:19.943622   47605 kubeadm.go:788] duration metric: took 8.187833ms waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943633   47605 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:19.951319   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.957985   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958016   47605 pod_ready.go:81] duration metric: took 6.605612ms waiting for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.958027   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958037   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.965229   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965312   47605 pod_ready.go:81] duration metric: took 7.251456ms waiting for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.965335   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965391   47605 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:22.010596   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:21.752755   47779 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.48920102s)
	I0626 20:47:21.752790   47779 crio.go:451] Took 3.489344 seconds to extract the tarball
	I0626 20:47:21.752802   47779 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:47:21.800026   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:21.844486   47779 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:47:21.844504   47779 cache_images.go:84] Images are preloaded, skipping loading
	I0626 20:47:21.844573   47779 ssh_runner.go:195] Run: crio config
	I0626 20:47:21.924367   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:21.924397   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:21.924411   47779 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:21.924431   47779 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.238 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-473235 NodeName:default-k8s-diff-port-473235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:47:21.924593   47779 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-473235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
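The kubeadm config above is emitted as one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---". A minimal Go sketch of walking such a stream, assuming gopkg.in/yaml.v3 and a hypothetical on-disk copy of the file; this is illustrative, not minikube's own loader:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical path to the generated config
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // yaml.v3 decodes "---"-separated documents one at a time
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document carries apiVersion/kind, e.g. kubeadm.k8s.io/v1beta3 InitConfiguration.
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }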
	I0626 20:47:21.924685   47779 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-473235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0626 20:47:21.924756   47779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:47:21.934851   47779 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:21.934951   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:21.944791   47779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0626 20:47:21.963087   47779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:21.981936   47779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0626 20:47:22.002207   47779 ssh_runner.go:195] Run: grep 192.168.61.238	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:22.006443   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
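The /etc/hosts one-liner above is an idempotent upsert: strip any existing control-plane.minikube.internal entry, append the fresh IP mapping, and copy the result back via a temp file. A small stdlib Go sketch of the same strip-then-append update; the path and hostname are taken from the log, the helper itself is illustrative:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost rewrites hostsPath so exactly one line maps host to ip,
    // mirroring the shell one-liner: drop any stale entry, append a fresh one.
    func upsertHost(hostsPath, ip, host string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = upsertHost("/etc/hosts", "192.168.61.238", "control-plane.minikube.internal") // error ignored in this sketch
    }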
	I0626 20:47:22.019555   47779 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235 for IP: 192.168.61.238
	I0626 20:47:22.019591   47779 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:22.019794   47779 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:22.019859   47779 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:22.019983   47779 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.key
	I0626 20:47:22.020069   47779 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key.761b3e7f
	I0626 20:47:22.020126   47779 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key
	I0626 20:47:22.020257   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:22.020296   47779 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:22.020309   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:22.020340   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:22.020376   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:22.020418   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:22.020475   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:22.021354   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:22.045205   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:22.069269   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:22.092387   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:22.120395   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:22.143199   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:22.167864   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:22.192223   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:22.218085   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:22.243249   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:22.269200   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:22.294015   47779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:22.313139   47779 ssh_runner.go:195] Run: openssl version
	I0626 20:47:22.319998   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:22.330864   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337082   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337144   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.343158   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:22.354507   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:22.366438   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371070   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371127   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.376858   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:22.387928   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:22.398665   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403091   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403139   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.410314   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:47:22.421729   47779 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:22.426373   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:22.432450   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:22.438093   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:22.446065   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:22.452103   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:22.457940   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
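Each "openssl x509 ... -checkend 86400" run above asks whether the certificate stays valid for at least the next 86400 seconds (24 hours); a non-zero exit would force regeneration. A stdlib Go sketch of the same expiry test, using crypto/x509 on one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the first certificate in the PEM file is still
    // valid at now+window, the same check `openssl x509 -checkend` performs.
    func validFor(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }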
	I0626 20:47:22.464492   47779 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:22.464647   47779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:22.464707   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:22.497723   47779 cri.go:89] found id: ""
	I0626 20:47:22.497803   47779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:22.508914   47779 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:22.508940   47779 kubeadm.go:636] restartCluster start
	I0626 20:47:22.508994   47779 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:22.519855   47779 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:22.521400   47779 kubeconfig.go:92] found "default-k8s-diff-port-473235" server: "https://192.168.61.238:8444"
	I0626 20:47:22.525126   47779 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:22.536252   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:22.536311   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:22.548698   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.049731   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.049805   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.062575   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.548966   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.549050   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.566351   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.048839   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.048917   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.065016   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.549110   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.549211   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.563150   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:25.049739   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.049828   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.066148   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.496598   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.496624   47309 pod_ready.go:81] duration metric: took 2.105519396s waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.496637   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504045   47309 pod_ready.go:92] pod "kube-proxy-phrv2" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.504067   47309 pod_ready.go:81] duration metric: took 7.42294ms waiting for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504078   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022096   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:25.022123   47309 pod_ready.go:81] duration metric: took 1.518037516s waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022135   47309 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.861798   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:21.981234   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:21.981272   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:21.862292   48478 retry.go:31] will retry after 2.138021733s: waiting for machine to come up
	I0626 20:47:24.002651   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:24.003184   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:24.003215   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:24.003122   48478 retry.go:31] will retry after 2.016131828s: waiting for machine to come up
	I0626 20:47:26.020987   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:26.021487   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:26.021511   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:26.021427   48478 retry.go:31] will retry after 2.317082546s: waiting for machine to come up
	I0626 20:47:24.497636   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:26.997525   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:27.997348   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:27.997394   47605 pod_ready.go:81] duration metric: took 8.031967272s waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:27.997408   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.548979   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.549054   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.566040   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.049569   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.049636   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.061513   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.548864   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.548952   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.566095   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.049674   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.049818   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.067169   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.549748   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.549831   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.568977   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.048852   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.048921   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.064935   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.549510   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.549614   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.562781   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.049396   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.049482   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.063237   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.548762   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.548853   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.561289   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:30.048758   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.048832   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.061079   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
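Process 47779 above is probing for a kube-apiserver pid via "pgrep -xnf" roughly every 500ms, logging each failed attempt until a deadline expires (the "context deadline exceeded" seen shortly after). A minimal stdlib Go sketch of that poll-until-deadline shape; the probe function here is a stand-in for the pgrep call, not minikube's actual code:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // pollUntil retries probe every interval until it succeeds or ctx expires,
    // the same shape as the repeated "Checking apiserver status ..." attempts.
    func pollUntil(ctx context.Context, interval time.Duration, probe func() error) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if err := probe(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // surfaces as "context deadline exceeded"
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        err := pollUntil(ctx, 500*time.Millisecond, func() error {
            return errors.New("unable to get apiserver pid") // stand-in for the pgrep probe
        })
        fmt.Println(err)
    }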
	I0626 20:47:27.040010   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:29.536317   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.537367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:28.340238   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:28.340738   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:28.340774   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:28.340660   48478 retry.go:31] will retry after 3.9887538s: waiting for machine to come up
	I0626 20:47:30.014224   47605 pod_ready.go:102] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.016636   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.016660   47605 pod_ready.go:81] duration metric: took 3.019245103s waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.016669   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022769   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.022794   47605 pod_ready.go:81] duration metric: took 6.118745ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022806   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.031975   47605 pod_ready.go:92] pod "kube-proxy-q5khr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.032004   47605 pod_ready.go:81] duration metric: took 9.189713ms waiting for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.032015   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040203   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.040231   47605 pod_ready.go:81] duration metric: took 8.207477ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040244   47605 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:33.054175   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:30.549812   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.549897   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.562540   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.049000   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.049071   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.061358   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.549602   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.549664   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.562690   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.049131   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:32.049223   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:32.061951   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.536775   47779 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:32.536827   47779 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:32.536843   47779 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:32.536914   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:32.571353   47779 cri.go:89] found id: ""
	I0626 20:47:32.571434   47779 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:32.588931   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:32.599519   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:32.599585   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610183   47779 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610212   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:32.738386   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.418561   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.612946   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.740311   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.830927   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:33.830992   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.372343   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.872109   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
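Rather than a full "kubeadm init", the reconfigure path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config, then waits for the apiserver process. A hedged os/exec sketch of driving those phases with the version-pinned PATH prefix from the log; error handling is simplified and this is not the real driver:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            // Same shape as the log: pin PATH to the versioned binaries dir, pass the staged config.
            cmd := exec.Command("/bin/bash", "-c",
                `sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase `+p+
                    ` --config /var/tmp/minikube/kubeadm.yaml`)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", p, err)
                return
            }
        }
    }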
	I0626 20:47:33.542864   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:36.037521   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:32.332668   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:32.333139   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:32.333169   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:32.333084   48478 retry.go:31] will retry after 3.571549947s: waiting for machine to come up
	I0626 20:47:35.906478   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.906962   46683 main.go:141] libmachine: (old-k8s-version-490377) Found IP for machine: 192.168.72.111
	I0626 20:47:35.906994   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has current primary IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.907004   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserving static IP address...
	I0626 20:47:35.907527   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.907573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | skip adding static IP to network mk-old-k8s-version-490377 - found existing host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"}
	I0626 20:47:35.907588   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserved static IP address: 192.168.72.111
	I0626 20:47:35.907605   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting for SSH to be available...
	I0626 20:47:35.907658   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Getting to WaitForSSH function...
	I0626 20:47:35.909932   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910346   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.910383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH client type: external
	I0626 20:47:35.910573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa (-rw-------)
	I0626 20:47:35.910604   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:35.910620   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | About to run SSH command:
	I0626 20:47:35.910635   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | exit 0
	I0626 20:47:36.006056   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | SSH cmd err, output: <nil>: 
	I0626 20:47:36.006429   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetConfigRaw
	I0626 20:47:36.007160   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.010144   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010519   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.010551   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010863   46683 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/config.json ...
	I0626 20:47:36.011106   46683 machine.go:88] provisioning docker machine ...
	I0626 20:47:36.011130   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.011366   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011542   46683 buildroot.go:166] provisioning hostname "old-k8s-version-490377"
	I0626 20:47:36.011561   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011705   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.014236   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014643   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.014674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014821   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.015013   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015156   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015371   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.015595   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.016010   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.016029   46683 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-490377 && echo "old-k8s-version-490377" | sudo tee /etc/hostname
	I0626 20:47:36.160735   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490377
	
	I0626 20:47:36.160797   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.163857   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164373   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.164425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164566   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.164778   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.164983   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.165128   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.165311   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.166001   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.166030   46683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-490377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-490377/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-490377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:36.302740   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:47:36.302789   46683 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:36.302839   46683 buildroot.go:174] setting up certificates
	I0626 20:47:36.302852   46683 provision.go:83] configureAuth start
	I0626 20:47:36.302868   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.303151   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.305958   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306411   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.306439   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306667   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.309069   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309447   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.309480   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309538   46683 provision.go:138] copyHostCerts
	I0626 20:47:36.309622   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:36.309635   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:36.309702   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:36.309813   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:36.309830   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:36.309868   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:36.309938   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:36.309947   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:36.309970   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:36.310026   46683 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-490377 san=[192.168.72.111 192.168.72.111 localhost 127.0.0.1 minikube old-k8s-version-490377]
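The provisioning step above issues a machine server certificate signed by the shared CA, with SANs covering the VM IP, loopback, "minikube", and the machine name. A compact crypto/x509 sketch of issuing such a SAN-bearing server cert; the CA is generated in-memory here for self-containment, whereas the real flow loads it from .minikube/certs, and key sizes and lifetimes are assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative in-memory CA; the real flow reuses ca.pem/ca-key.pem from disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-490377"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the log: VM IP, loopback, localhost, minikube, machine name.
            IPAddresses: []net.IP{net.ParseIP("192.168.72.111"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-490377"},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // errors ignored in this sketch
    }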
	I0626 20:47:36.441131   46683 provision.go:172] copyRemoteCerts
	I0626 20:47:36.441183   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:36.441204   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.444557   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445034   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.445067   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445311   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.445540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.445700   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.445857   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:36.542375   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:36.570185   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0626 20:47:36.596725   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:36.622954   46683 provision.go:86] duration metric: configureAuth took 320.087643ms
	I0626 20:47:36.622983   46683 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:36.623205   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:47:36.623301   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.626305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626634   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.626666   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626856   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.627048   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627224   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627349   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.627520   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.627929   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.627954   46683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:36.963666   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:36.963695   46683 machine.go:91] provisioned docker machine in 952.57418ms
	I0626 20:47:36.963707   46683 start.go:300] post-start starting for "old-k8s-version-490377" (driver="kvm2")
	I0626 20:47:36.963719   46683 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:36.963747   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.964067   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:36.964099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.966948   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.967383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967528   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.967735   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.967900   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.968052   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.070309   46683 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:37.075040   46683 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:37.075064   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:37.075125   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:37.075208   46683 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:37.075306   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:37.086362   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:37.110475   46683 start.go:303] post-start completed in 146.752359ms
	I0626 20:47:37.110502   46683 fix.go:56] fixHost completed within 22.522880386s
	I0626 20:47:37.110525   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.113530   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.113925   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.113961   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.114168   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.114372   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114577   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114730   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.114896   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:37.115549   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:37.115572   46683 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:47:37.247352   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812457.183569581
	
	I0626 20:47:37.247376   46683 fix.go:206] guest clock: 1687812457.183569581
	I0626 20:47:37.247386   46683 fix.go:219] Guest: 2023-06-26 20:47:37.183569581 +0000 UTC Remote: 2023-06-26 20:47:37.110506986 +0000 UTC m=+360.350082215 (delta=73.062595ms)
	I0626 20:47:37.247410   46683 fix.go:190] guest clock delta is within tolerance: 73.062595ms
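The fix step compares the guest clock (read over SSH via "date +%s.%N") against the host clock and only resyncs when the delta exceeds a tolerance; here the 73ms delta passes. A one-function Go sketch of that check; the 2s threshold is an assumption, since the log does not state the actual bound:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance mirrors the guest-vs-host clock check: accept the guest
    // clock when |guest-host| is under the tolerance, otherwise resync.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d <= tol
    }

    func main() {
        host := time.Now()
        guest := host.Add(73 * time.Millisecond)                 // the delta observed in the log
        fmt.Println(withinTolerance(guest, host, 2*time.Second)) // 2s tolerance is assumed
    }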
	I0626 20:47:37.247416   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 22.659832787s
	I0626 20:47:37.247442   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.247723   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:37.250740   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251154   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.251194   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251316   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.251835   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252015   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252101   46683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:37.252144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.252251   46683 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:37.252273   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.255147   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255231   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255440   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255464   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255584   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.255756   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.255765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255792   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255930   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.255946   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.256080   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.256099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.256206   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.256301   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.370571   46683 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:37.376548   46683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:37.531359   46683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:37.540038   46683 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:37.540104   46683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:37.556531   46683 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
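
The find/mv invocation above sidelines any pre-existing bridge or podman CNI configs by renaming them with a `.mk_disabled` suffix, so the bridge CNI written later is the only one kubelet can pick up (and the originals can be restored). A rough Go equivalent (the function name is made up; the real step runs the find command over SSH as root):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir with a
// .mk_disabled suffix, skipping files that are already disabled.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}
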
	I0626 20:47:37.556554   46683 start.go:466] detecting cgroup driver to use...
	I0626 20:47:37.556620   46683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:37.574430   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:37.586766   46683 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:37.586829   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:37.599572   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:37.612901   46683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:37.717489   46683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:37.851503   46683 docker.go:212] disabling docker service ...
	I0626 20:47:37.851576   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:37.864932   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:37.877087   46683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:37.990007   46683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:38.107613   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
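
The stop/disable/mask ladder above is how the runner makes sure neither cri-dockerd nor dockerd can grab the runtime socket before cri-o starts. A compact sketch of the same ladder, shelling out to systemctl (assumed helper; failures are tolerated because a unit may simply not exist on the image):

package main

import (
	"fmt"
	"os/exec"
)

// neutralizeService stops, disables and masks a systemd unit, mirroring the
// sequence applied to cri-docker and docker in the log.
func neutralizeService(unit string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		// Failures are tolerated: the unit may not exist on this image.
		if err := exec.Command("sudo", args...).Run(); err != nil {
			fmt.Printf("%v: %v (ignored)\n", args, err)
		}
	}
}

func main() {
	for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		neutralizeService(u)
	}
}
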
	I0626 20:47:38.122183   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:38.141502   46683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0626 20:47:38.141567   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.152052   46683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:38.152128   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.161786   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.172779   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
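
The three sed edits above pin cri-o's pause image, switch its cgroup manager to cgroupfs, and re-insert a conmon_cgroup = "pod" line directly after it. A Go sketch that performs the same rewrites on the config file (edge cases such as multiple matching lines are handled no better than sed's; file paths are taken from the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf applies the same edits as the sed commands above: pin the
// pause image, force the cgroupfs cgroup manager, and re-add a
// conmon_cgroup = "pod" line right after it. Like sed -i, it rewrites
// every matching line and does no validation beyond that.
func patchCrioConf(path, pauseImage, cgroupMgr string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, fmt.Sprintf("pause_image = %q", pauseImage))
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
	return os.WriteFile(path, []byte(s), 0o644)
}

func main() {
	err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.1", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
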
	I0626 20:47:38.182823   46683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:38.192695   46683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:38.201322   46683 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:38.201404   46683 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:38.213549   46683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:47:38.225080   46683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:38.336249   46683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:38.508323   46683 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:38.508443   46683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:47:38.514430   46683 start.go:534] Will wait 60s for crictl version
	I0626 20:47:38.514496   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:38.518918   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:38.559642   46683 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:38.559731   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.616720   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.678573   46683 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
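
Between the crio restart and the crictl version probe, the runner waits up to 60s for /var/run/crio/crio.sock to appear. A small sketch of that wait loop (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket path until it exists or the deadline
// passes, mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}
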
	I0626 20:47:35.555132   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.053446   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:35.373039   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.872006   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.895929   47779 api_server.go:72] duration metric: took 2.064992302s to wait for apiserver process to appear ...
	I0626 20:47:35.895959   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:35.895982   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:35.896602   47779 api_server.go:269] stopped: https://192.168.61.238:8444/healthz: Get "https://192.168.61.238:8444/healthz": dial tcp 192.168.61.238:8444: connect: connection refused
	I0626 20:47:36.397305   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.868801   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.868839   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.868854   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.907251   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.907280   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.907310   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.921394   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.921428   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:40.397045   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.405040   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.405071   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:40.897690   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.904374   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.904424   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:41.396883   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:41.404743   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:47:41.420191   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:41.420219   47779 api_server.go:131] duration metric: took 5.524252602s to wait for apiserver health ...
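
The healthz exchange above is worth unpacking: the 403s come back while the apiserver is already serving but the RBAC bootstrap roles that permit anonymous access to /healthz have not yet been created, and the 500s enumerate poststarthooks that are still running; the loop simply retries until it sees 200 `ok`. A sketch of such a poll loop (skipping TLS verification as a probe-only shortcut, since the host does not trust the apiserver's serving cert):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
// 403 (anonymous user before RBAC bootstrap) and 500 (poststarthooks still
// running) are treated as "not ready yet", exactly as in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Probe-only shortcut: the host does not trust the serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.238:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
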
	I0626 20:47:41.420231   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:41.420249   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:41.422187   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:47:38.537628   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:40.538267   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.680019   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:38.682934   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683263   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:38.683294   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683534   46683 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:38.687976   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
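
The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` one-liner above replaces any existing host.minikube.internal entry before appending a fresh one, so repeated restarts never accumulate duplicate lines in /etc/hosts. A local Go sketch of the same upsert (a guest-side update would still go through a temp file plus sudo cp, as in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var b strings.Builder
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		b.WriteString(line)
		b.WriteByte('\n')
	}
	b.WriteString(ip + "\t" + name + "\n")
	return os.WriteFile(path, []byte(b.String()), 0o644)
}

func main() {
	fmt.Println(upsertHost("/etc/hosts", "192.168.72.1", "host.minikube.internal"))
}
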
	I0626 20:47:38.701534   46683 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 20:47:38.701610   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:38.739497   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:38.739584   46683 ssh_runner.go:195] Run: which lz4
	I0626 20:47:38.744080   46683 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:38.748755   46683 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:38.748792   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0626 20:47:40.654759   46683 crio.go:444] Took 1.910714 seconds to copy over tarball
	I0626 20:47:40.654830   46683 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:47:40.057751   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:42.555707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:41.423617   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:41.447117   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:41.485897   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:41.505667   47779 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:41.505714   47779 system_pods.go:61] "coredns-5d78c9869d-78zrr" [2927dce3-aa13-4ed4-b5a4-bc1b101ec044] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:41.505730   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [5bbba401-cfdd-4e97-ac44-3d1410344b23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:41.505742   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [90d064bc-d31f-4690-b100-8979cdd518c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:41.505755   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [3f686efe-3c90-42ed-a1b9-2cda3e7e49b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:41.505773   47779 system_pods.go:61] "kube-proxy-7t2dk" [bebeb55d-8c7d-4543-9ee1-adbd946904f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:41.505786   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [c2436cf6-0128-425c-9db3-b3d01e5fb5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:41.505799   47779 system_pods.go:61] "metrics-server-74d5c6b9c-swcxn" [81e42c6b-4c7d-40b1-bd4a-ccf7ce2dea17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:41.505811   47779 system_pods.go:61] "storage-provisioner" [18d1c7dc-00a6-4842-b441-f3468adde4ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:41.505822   47779 system_pods.go:74] duration metric: took 19.895923ms to wait for pod list to return data ...
	I0626 20:47:41.505833   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:41.515165   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:41.515201   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:41.515215   47779 node_conditions.go:105] duration metric: took 9.372368ms to run NodePressure ...
	I0626 20:47:41.515243   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:41.848353   47779 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854780   47779 kubeadm.go:787] kubelet initialised
	I0626 20:47:41.854805   47779 kubeadm.go:788] duration metric: took 6.420882ms waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854814   47779 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:41.861323   47779 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.867181   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867214   47779 pod_ready.go:81] duration metric: took 5.86597ms waiting for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.867225   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867235   47779 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.872900   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872928   47779 pod_ready.go:81] duration metric: took 5.684109ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.872940   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872948   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.881471   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881501   47779 pod_ready.go:81] duration metric: took 8.543041ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.881513   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881531   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.892246   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892292   47779 pod_ready.go:81] duration metric: took 10.741136ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.892310   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892325   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297272   47779 pod_ready.go:92] pod "kube-proxy-7t2dk" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:43.297299   47779 pod_ready.go:81] duration metric: took 1.404965565s waiting for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297308   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:42.544224   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.846930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.389432   46683 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.73456858s)
	I0626 20:47:44.389462   46683 crio.go:451] Took 3.734677 seconds to extract the tarball
	I0626 20:47:44.389480   46683 ssh_runner.go:146] rm: /preloaded.tar.lz4
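
The preload path above is: stat the tarball on the guest (absent, so stat exits 1), scp ~440MB of preloaded images to /preloaded.tar.lz4, extract it over /var with lz4, then delete it. A sketch of the extract-and-clean step (run locally with sudo; the real runner performs this over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// extractPreload unpacks the lz4-compressed image preload over /var, the
// same `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4` the runner executes,
// then removes the tarball to reclaim its space on the guest disk.
func extractPreload(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	fmt.Printf("took %.6f seconds to extract the tarball\n", time.Since(start).Seconds())
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
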
	I0626 20:47:44.438169   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:44.478220   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:44.478250   46683 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:47:44.478337   46683 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.478364   46683 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.478383   46683 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.478384   46683 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.478450   46683 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0626 20:47:44.478365   46683 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.478345   46683 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.478339   46683 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479752   46683 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.479758   46683 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.479760   46683 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.479759   46683 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.479748   46683 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.479802   46683 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.479810   46683 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479817   46683 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0626 20:47:44.681554   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720619   46683 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0626 20:47:44.720677   46683 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720730   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.724810   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.753258   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0626 20:47:44.765072   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.767167   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.768723   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0626 20:47:44.769466   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.769474   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.807428   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.904206   46683 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0626 20:47:44.904243   46683 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0626 20:47:44.904250   46683 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.904261   46683 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926166   46683 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0626 20:47:44.926203   46683 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.926204   46683 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0626 20:47:44.926222   46683 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.926222   46683 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0626 20:47:44.926248   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926247   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926251   46683 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0626 20:47:44.926365   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936135   46683 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0626 20:47:44.936175   46683 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.936236   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936252   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.936274   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.940272   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.940352   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0626 20:47:44.940409   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.952147   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:45.031640   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0626 20:47:45.031677   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0626 20:47:45.061947   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0626 20:47:45.062070   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0626 20:47:45.062166   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0626 20:47:45.062261   46683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.062279   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0626 20:47:45.067511   46683 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0626 20:47:45.067561   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0626 20:47:45.094726   46683 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.094780   46683 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.384887   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:45.947601   46683 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0626 20:47:45.947707   46683 cache_images.go:92] LoadImages completed in 1.469441722s
	W0626 20:47:45.947778   46683 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
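
The image-cache dance above repeats one sequence per required image: `podman image inspect --format {{.Id}}` to see whether the runtime already holds the image at the expected hash, `crictl rmi` to drop a stale tag, and `podman load -i` to import the cached tarball. Here it ends non-fatally because coredns_1.6.2 was never downloaded into the host-side cache. A sketch of the per-image step (the wantID value below is the pause:3.1 hash from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage checks whether the runtime already holds the image at the
// expected ID; if not, it drops the stale tag and loads the cached tarball,
// mirroring the inspect / rmi / load sequence in the log.
func ensureImage(image, wantID, cachePath string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the right hash
	}
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // stale tag, if any
	if err := exec.Command("sudo", "podman", "load", "-i", cachePath).Run(); err != nil {
		return fmt.Errorf("loading %s from %s: %w", image, cachePath, err)
	}
	return nil
}

func main() {
	// Values from the pause:3.1 transfer in the log.
	err := ensureImage("registry.k8s.io/pause:3.1",
		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
		"/var/lib/minikube/images/pause_3.1")
	if err != nil {
		fmt.Println(err)
	}
}
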
	I0626 20:47:45.947863   46683 ssh_runner.go:195] Run: crio config
	I0626 20:47:46.009928   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:47:46.009955   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:46.009968   46683 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:46.009987   46683 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-490377 NodeName:old-k8s-version-490377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0626 20:47:46.010140   46683 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-490377"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-490377
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.111:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:47:46.010224   46683 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-490377 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
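
The kubelet drop-in printed above is rendered from a template and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A trimmed text/template sketch that reproduces it with the values from this run (the template text is abridged from the log, not minikube's exact source):

package main

import (
	"os"
	"text/template"
)

// kubeletUnitTmpl is abridged from the drop-in printed above.
const kubeletUnitTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.Socket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnitTmpl))
	if err := t.Execute(os.Stdout, map[string]string{
		"Runtime":           "crio",
		"KubernetesVersion": "v1.16.0",
		"Socket":            "unix:///var/run/crio/crio.sock",
		"NodeName":          "old-k8s-version-490377",
		"NodeIP":            "192.168.72.111",
	}); err != nil {
		panic(err)
	}
}
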
	I0626 20:47:46.010284   46683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0626 20:47:46.023111   46683 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:46.023196   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:46.034988   46683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0626 20:47:46.056824   46683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:46.077802   46683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0626 20:47:46.102465   46683 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:46.107391   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:46.121242   46683 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377 for IP: 192.168.72.111
	I0626 20:47:46.121277   46683 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:46.121466   46683 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:46.121520   46683 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:46.121635   46683 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.key
	I0626 20:47:46.121735   46683 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key.760f2aeb
	I0626 20:47:46.121789   46683 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key
	I0626 20:47:46.121928   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:46.121970   46683 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:46.121985   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:46.122024   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:46.122063   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:46.122098   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:46.122158   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:46.123026   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:46.149101   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:46.179305   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:46.207421   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:46.233407   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:46.259148   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:46.284728   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:46.312152   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:46.341061   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:46.370455   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:46.398160   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:46.424710   46683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:46.446379   46683 ssh_runner.go:195] Run: openssl version
	I0626 20:47:46.452825   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:46.466808   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472676   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472760   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.479077   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:46.490061   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:46.501801   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.506966   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.507034   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.513146   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:46.523600   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:46.534659   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540612   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540677   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.548499   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
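
The openssl/ln pairs above install each CA under /etc/ssl/certs by its OpenSSL subject hash, which is how the guest's TLS stack discovers trusted CAs; 51391683.0, 3ec20f2e.0 and b5213941.0 are those hash names. A sketch of one install (assumed helper; sudo because /etc/ssl/certs is root-owned):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert symlinks a PEM under /etc/ssl/certs by its OpenSSL subject
// hash, which is what the `openssl x509 -hash` / `ln -fs` pairs above do.
func installCACert(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// -f replaces a stale link; sudo because /etc/ssl/certs is root-owned.
	return link, exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	link, err := installCACert("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}
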
	I0626 20:47:46.562786   46683 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:46.569679   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:46.576129   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:46.582331   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:46.588334   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:46.595635   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:46.603058   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
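
The run of `-checkend 86400` probes above asks openssl whether each control-plane cert will expire within 24 hours; a non-zero exit would force regeneration instead of reuse. A one-function sketch:

package main

import (
	"fmt"
	"os/exec"
)

// expiresSoon wraps `openssl x509 -checkend`, which exits non-zero when the
// certificate expires within the given number of seconds; the log probes
// each control-plane cert against 86400s (24h) before reusing it.
func expiresSoon(certPath string, seconds int) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", fmt.Sprint(seconds)).Run() != nil
}

func main() {
	fmt.Println(expiresSoon("/var/lib/minikube/certs/apiserver-etcd-client.crt", 86400))
}
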
	I0626 20:47:46.611126   46683 kubeadm.go:404] StartCluster: {Name:old-k8s-version-490377 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:46.611211   46683 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:46.611277   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:46.650099   46683 cri.go:89] found id: ""
	I0626 20:47:46.650177   46683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:46.660940   46683 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:46.660964   46683 kubeadm.go:636] restartCluster start
	I0626 20:47:46.661022   46683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:46.671400   46683 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:46.672450   46683 kubeconfig.go:92] found "old-k8s-version-490377" server: "https://192.168.72.111:8443"
	I0626 20:47:46.675477   46683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:46.684496   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:46.684568   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:46.695719   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:45.056085   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.554295   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:45.865956   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:48.003697   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.505286   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:49.505314   47779 pod_ready.go:81] duration metric: took 6.207998312s waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:49.505328   47779 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:47.037142   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.037207   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.535460   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.196149   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.196252   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.211751   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:47.696286   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.696381   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.707472   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.195967   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.196041   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.207809   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.696375   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.696449   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.708571   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.196097   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.196176   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.207717   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.696692   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.696768   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.708954   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.196531   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.196611   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.209111   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.696563   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.696648   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.708744   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.196237   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.196305   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.207654   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.695908   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.695988   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.708029   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.056186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.057083   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.519442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.520019   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.536833   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.036673   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.196170   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.196233   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.208953   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:52.696518   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.696600   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.707537   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.196046   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.196113   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.207272   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.695791   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.695873   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.706845   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.196452   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.196530   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.208048   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.696169   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.696236   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.707640   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.195889   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.195968   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.207560   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.695899   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.695978   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.707573   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:56.195900   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:56.195973   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:56.207335   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:56.685138   46683 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
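[editor's note] The run above probes `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until a deadline expires, then gives up and reconfigures. A sketch of that poll-until-deadline pattern; the check function, interval, and timeout are illustrative, not minikube's actual implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the pgrep probe in the log: exit status 0
    // means a matching process exists.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()

    	for {
    		select {
    		case <-ctx.Done():
    			// Same outcome as the log: stop polling and reconfigure.
    			fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
    			return
    		case <-tick.C:
    			if apiserverRunning() {
    				fmt.Println("apiserver process found")
    				return
    			}
    		}
    	}
    }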
	I0626 20:47:56.685165   46683 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:56.685180   46683 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:56.685239   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:56.719427   46683 cri.go:89] found id: ""
	I0626 20:47:56.719494   46683 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:56.735328   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:56.747355   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:56.747420   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756129   46683 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756156   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:54.554213   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:57.052902   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:59.055349   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.018337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.025514   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.039195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.538216   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.883656   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.423073   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.641018   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.751205   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.840521   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:57.840645   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.355178   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.854929   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.355164   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.385611   46683 api_server.go:72] duration metric: took 1.545094971s to wait for apiserver process to appear ...
	I0626 20:47:59.385632   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:59.385650   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:01.553510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.554922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.520442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.021809   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.040767   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.535801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:04.386860   46683 api_server.go:269] stopped: https://192.168.72.111:8443/healthz: Get "https://192.168.72.111:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0626 20:48:04.888001   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:05.958461   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:48:05.958486   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:48:05.958498   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.017029   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.017061   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.387577   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.394038   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.394072   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.887033   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.902891   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.902931   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:07.387632   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:07.393827   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:48:07.402591   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:48:07.402618   46683 api_server.go:131] duration metric: took 8.016980167s to wait for apiserver health ...
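[editor's note] The healthz sequence above shows a typical apiserver startup progression: 403 while RBAC is not yet bootstrapped (anonymous requests are forbidden), then 500 while poststarthooks are still pending, then 200 "ok". A self-contained Go sketch of the same probe loop; the endpoint is taken from the log, and InsecureSkipVerify is only to keep the sketch standalone, where a real client would trust the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		resp, err := client.Get("https://192.168.72.111:8443/healthz")
    		if err != nil {
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		// 403: RBAC not bootstrapped yet; 500: poststarthooks pending;
    		// 200 with body "ok": healthy.
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("healthy:", string(body))
    			return
    		}
    		fmt.Printf("not ready (%d), retrying\n", resp.StatusCode)
    		time.Sleep(500 * time.Millisecond)
    	}
    }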
	I0626 20:48:07.402628   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:48:07.402639   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:48:07.404494   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:48:06.054185   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:08.055165   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.520306   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.521293   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:10.021358   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.537058   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:09.537801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.405919   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:48:07.416748   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
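[editor's note] The 457-byte file written to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. Its exact contents are not shown in the log; a generic bridge conflist of the shape the bridge plugin accepts looks like the following, with all field values illustrative:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        }
      ]
    }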
	I0626 20:48:07.436249   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:48:07.445695   46683 system_pods.go:59] 7 kube-system pods found
	I0626 20:48:07.445732   46683 system_pods.go:61] "coredns-5644d7b6d9-5lcxw" [8e1a5fff-55d8-4d32-ae6f-c7694c8b5878] Running
	I0626 20:48:07.445741   46683 system_pods.go:61] "etcd-old-k8s-version-490377" [3fff7ab3-7ac7-4417-b3b8-9794f427c880] Running
	I0626 20:48:07.445750   46683 system_pods.go:61] "kube-apiserver-old-k8s-version-490377" [1b8e6b87-0b15-4586-8133-2dd33ac0b069] Running
	I0626 20:48:07.445771   46683 system_pods.go:61] "kube-controller-manager-old-k8s-version-490377" [2635a03c-884d-4245-a8ef-cb02e14443b8] Running
	I0626 20:48:07.445792   46683 system_pods.go:61] "kube-proxy-64btm" [0a8ee3c6-93a1-4989-94d0-209e8c655a64] Running
	I0626 20:48:07.445805   46683 system_pods.go:61] "kube-scheduler-old-k8s-version-490377" [2a6905a0-4f64-4cab-9b6d-55c708c07f8d] Running
	I0626 20:48:07.445815   46683 system_pods.go:61] "storage-provisioner" [9bf36874-b862-41f9-89d4-2d900adc2003] Running
	I0626 20:48:07.445826   46683 system_pods.go:74] duration metric: took 9.553318ms to wait for pod list to return data ...
	I0626 20:48:07.445836   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:48:07.450777   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:48:07.450816   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:48:07.450831   46683 node_conditions.go:105] duration metric: took 4.985221ms to run NodePressure ...
	I0626 20:48:07.450854   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:48:07.693070   46683 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:48:07.696336   46683 retry.go:31] will retry after 291.332727ms: kubelet not initialised
	I0626 20:48:07.992856   46683 retry.go:31] will retry after 210.561512ms: kubelet not initialised
	I0626 20:48:08.208369   46683 retry.go:31] will retry after 371.110023ms: kubelet not initialised
	I0626 20:48:08.585342   46683 retry.go:31] will retry after 1.199452561s: kubelet not initialised
	I0626 20:48:09.790625   46683 retry.go:31] will retry after 923.734482ms: kubelet not initialised
	I0626 20:48:10.719166   46683 retry.go:31] will retry after 1.019822632s: kubelet not initialised
	I0626 20:48:11.743554   46683 retry.go:31] will retry after 3.253867153s: kubelet not initialised
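[editor's note] The retry.go lines above wait increasing, irregular intervals between kubelet checks (291ms up through ~13.5s later in the run), consistent with a jittered exponential backoff. A sketch of that pattern; the initial wait, growth factor, and jitter range are assumptions for illustration, not minikube's actual tuning:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retry(check func() error, initial time.Duration, attempts int) error {
    	wait := initial
    	for i := 0; i < attempts; i++ {
    		if err := check(); err == nil {
    			return nil
    		}
    		// Jitter in [0.5, 1.5) of the nominal wait, then double it,
    		// giving the irregular-but-growing intervals seen in the log.
    		jittered := time.Duration(float64(wait) * (0.5 + rand.Float64()))
    		fmt.Printf("will retry after %v: kubelet not initialised\n", jittered)
    		time.Sleep(jittered)
    		wait *= 2
    	}
    	return errors.New("kubelet not initialised after all attempts")
    }

    func main() {
    	start := time.Now()
    	_ = retry(func() error {
    		if time.Since(start) > 3*time.Second {
    			return nil // stand-in: pretend kubelet came up
    		}
    		return errors.New("not yet")
    	}, 250*time.Millisecond, 10)
    }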
	I0626 20:48:10.552964   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.554534   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.520923   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.019384   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.036991   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:14.536734   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.002028   46683 retry.go:31] will retry after 2.234934883s: kubelet not initialised
	I0626 20:48:14.556223   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.053741   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.054276   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.021470   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.519794   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.036192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.036285   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:21.037136   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.242709   46683 retry.go:31] will retry after 6.079359776s: kubelet not initialised
	I0626 20:48:21.054851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.553653   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:22.020435   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:24.022102   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.037271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:25.037337   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.328332   46683 retry.go:31] will retry after 12.999865358s: kubelet not initialised
	I0626 20:48:25.553983   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.052253   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:26.518782   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.520217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:27.535792   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:29.536336   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:30.055419   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.553794   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:31.018773   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:33.020048   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:35.021492   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.036513   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:34.037364   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.535663   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.334795   46683 retry.go:31] will retry after 13.541680893s: kubelet not initialised
	I0626 20:48:35.052975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.053634   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.053672   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.519603   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.520279   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:38.536271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:40.536344   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.553411   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.554235   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.520569   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.522354   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:42.536811   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.035291   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.554795   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.053080   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:46.019919   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.021534   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:47.036908   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.537386   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.882566   46683 kubeadm.go:787] kubelet initialised
	I0626 20:48:49.882597   46683 kubeadm.go:788] duration metric: took 42.189498896s waiting for restarted kubelet to initialise ...
	I0626 20:48:49.882608   46683 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:48:49.888018   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894462   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.894488   46683 pod_ready.go:81] duration metric: took 6.438689ms waiting for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894501   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899336   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.899358   46683 pod_ready.go:81] duration metric: took 4.848554ms waiting for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899370   46683 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903574   46683 pod_ready.go:92] pod "etcd-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.903593   46683 pod_ready.go:81] duration metric: took 4.21548ms waiting for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903605   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908052   46683 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.908071   46683 pod_ready.go:81] duration metric: took 4.457812ms waiting for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908091   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281099   46683 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.281124   46683 pod_ready.go:81] duration metric: took 373.02512ms waiting for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281139   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681520   46683 pod_ready.go:92] pod "kube-proxy-64btm" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.681541   46683 pod_ready.go:81] duration metric: took 400.395983ms waiting for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681552   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081638   46683 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:51.081657   46683 pod_ready.go:81] duration metric: took 400.09969ms waiting for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081666   46683 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
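[editor's note] The pod_ready.go lines throughout this run reduce to one predicate: is the pod's Ready condition True. A minimal client-go sketch of that check, assuming a kubeconfig at the default location; the pod name is a stand-in for any pod polled above:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, the
    // same predicate behind the "Ready":"True"/"False" lines in the log.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Stand-in pod name; substitute any pod from the run above.
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-64btm", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod %q Ready: %v\n", pod.Name, isPodReady(pod))
    }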
	I0626 20:48:50.053581   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.053802   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:50.520090   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.019821   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.020035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.037008   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.037516   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:56.037585   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.491534   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.989758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.552843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.054370   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.020770   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.520039   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.535930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.536377   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.488491   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.489659   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.552927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.056474   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:01.520560   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.019945   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.536728   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.537724   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.989651   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.989796   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.552707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.553918   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:08.554230   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.520608   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.020075   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:07.036576   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.537071   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.990147   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.489229   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.053576   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:13.054110   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.519744   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.020968   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:12.037949   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.537389   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.989856   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.488429   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.490529   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:15.553553   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.054036   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.519975   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.520288   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:17.036172   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:19.036248   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.036421   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.989943   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.990154   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.553570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.554626   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.020817   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.520602   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.036595   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.038742   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.990299   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:24.994358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.053465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.053635   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.520912   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:28.020413   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.537294   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.489707   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.990957   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.552847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:31.554360   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.052585   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:30.520207   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.521484   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:35.020064   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.035666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.036325   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.535889   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.489468   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.989668   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.556092   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.054617   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:37.519850   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:40.020217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.036499   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.537332   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.992357   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.489925   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.553528   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.052935   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:42.520450   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.520634   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.035299   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.036688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.990255   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.489449   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.553009   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.553560   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:47.018978   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.020289   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.535753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.536227   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.990710   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.490459   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.553710   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.054824   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.520532   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:54.027509   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:52.537108   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.036452   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.989608   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.990105   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.990610   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.552894   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.553520   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:56.519796   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.021401   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.537189   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.537365   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.991065   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.489396   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.053139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.062882   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:01.519625   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:03.520031   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.037036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.988698   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.991107   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.551742   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:06.553955   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.053612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:05.520676   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:08.019671   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:10.021418   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.035613   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.036666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.536861   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.488874   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.490059   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.492236   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.553481   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.054574   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:12.518824   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.519670   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.036399   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.537496   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:13.990228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.488219   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.054609   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.553511   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.519795   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.520535   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:19.037355   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.037964   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.488819   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:20.489536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.053521   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.553922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.021035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.519784   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.535974   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.536845   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:22.988574   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:24.990088   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:26.052017   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.054905   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.520011   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.019323   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.019500   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.537999   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.036187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.488859   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:29.990482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.551701   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.554272   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.019810   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.023728   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.036817   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.042849   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.536415   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.488492   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.491986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:35.053986   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:37.055115   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.520551   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.019307   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:38.537119   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:40.537474   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.991471   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.489241   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.490458   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.552836   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.553914   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:44.052850   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.020033   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.520646   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.036648   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:45.036959   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.990768   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.489482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.053271   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.553811   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.018851   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.021042   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.021254   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:47.536099   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.036995   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.489670   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.990231   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.554677   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.053841   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.520067   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.021727   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.042201   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:54.536260   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.489402   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.492509   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.055031   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.055181   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.521342   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.020905   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.036992   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.037534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:01.538152   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.993709   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.488776   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.555263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.054478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.519672   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:05.020878   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.036330   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.036424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.489742   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.988712   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.555161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.052680   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.055326   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.519641   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.520120   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.536306   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:10.537094   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.988973   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.989715   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.488986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.554973   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.054638   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.019264   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.020253   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.537126   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.037318   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:13.490053   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.988498   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.055193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:18.553665   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.522548   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.020609   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.536765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.037132   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.990230   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.991216   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.555044   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.055590   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:21.520052   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.520574   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:22.038085   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.535549   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.022544   47309 pod_ready.go:81] duration metric: took 4m0.000394525s waiting for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:25.022570   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:25.022598   47309 pod_ready.go:38] duration metric: took 4m12.221722724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:25.022623   47309 kubeadm.go:640] restartCluster took 4m31.561880232s
	W0626 20:51:25.022684   47309 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:25.022722   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
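
The interleaved pod_ready.go entries above come from four concurrent test profiles (PIDs 46683, 47309, 47605, 47779), each polling its metrics-server pod's Ready condition every few seconds until the 4m0s deadline expires, after which restartCluster gives up and falls back to `kubeadm reset`. Below is a minimal client-go sketch of that readiness-polling pattern; it is not minikube's actual pod_ready implementation, the pod name is copied from the log purely as an example, and the kubeconfig path and 2s interval are assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Same 4m0s cap that expires in the log lines above.
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()

        const ns, name = "kube-system", "metrics-server-74d5c6b9c-7szm5" // example pod from this log
        for {
            pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                // Mirrors the E-level "WaitExtra: waitPodCondition: context deadline exceeded" entries.
                fmt.Println("waitPodCondition: context deadline exceeded")
                return
            case <-time.After(2 * time.Second):
            }
        }
    }
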
	I0626 20:51:22.489438   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.490731   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.554637   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:27.555070   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.020700   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.520337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.990408   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.990900   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.490197   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:30.053627   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.041205   47605 pod_ready.go:81] duration metric: took 4m0.000945978s waiting for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:31.041235   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:31.041252   47605 pod_ready.go:38] duration metric: took 4m11.097608636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:31.041297   47605 kubeadm.go:640] restartCluster took 4m31.299321581s
	W0626 20:51:31.041365   47605 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:31.041409   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:31.019045   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.022453   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.492871   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.989984   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.520977   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:37.521128   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.021691   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:38.489349   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.989368   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.519812   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:44.520689   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.989461   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:45.491205   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:47.019936   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.506391   47779 pod_ready.go:81] duration metric: took 4m0.001048325s waiting for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:49.506423   47779 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:49.506441   47779 pod_ready.go:38] duration metric: took 4m7.651614118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:49.506483   47779 kubeadm.go:640] restartCluster took 4m26.997522391s
	W0626 20:51:49.506561   47779 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:49.506595   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:47.990134   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.990758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:52.489144   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:54.990008   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:56.650050   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.627303734s)
	I0626 20:51:56.650132   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:51:56.665246   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:51:56.678749   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:51:56.690413   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
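
The exit-status-2 `ls` above is how minikube decides whether stale kubeconfigs need cleaning before re-running `kubeadm init`: after `kubeadm reset`, none of the four files under /etc/kubernetes exist, so the cleanup is skipped. A standalone sketch of the same existence check, assuming only the file list taken from the log (the printed messages are illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The four kubeconfigs kubeadm writes; all are gone after `kubeadm reset`.
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        allPresent := true
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Printf("cannot access %s: %v\n", f, err)
                allPresent = false
            }
        }
        if allPresent {
            fmt.Println("stale configs present: clean up before init")
        } else {
            fmt.Println("config check failed, skipping stale config cleanup")
        }
    }
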
	I0626 20:51:56.690459   47309 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:51:56.757308   47309 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:51:56.757415   47309 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:51:56.915845   47309 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:51:56.916021   47309 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:51:56.916158   47309 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:51:57.137465   47309 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:51:57.139330   47309 out.go:204]   - Generating certificates and keys ...
	I0626 20:51:57.139431   47309 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:51:57.139514   47309 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:51:57.139648   47309 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:51:57.139718   47309 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:51:57.139852   47309 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:51:57.139914   47309 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:51:57.139997   47309 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:51:57.140101   47309 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:51:57.140224   47309 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:51:57.140830   47309 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:51:57.141343   47309 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:51:57.141471   47309 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:51:57.294061   47309 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:51:57.436714   47309 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:51:57.707612   47309 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:51:57.875383   47309 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:51:57.893698   47309 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:51:57.895257   47309 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:51:57.895427   47309 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:51:58.020261   47309 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:51:58.022209   47309 out.go:204]   - Booting up control plane ...
	I0626 20:51:58.022349   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:51:58.023359   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:51:58.024253   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:51:58.026955   47309 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:51:58.032948   47309 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:51:57.489729   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:59.490578   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:01.491617   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:05.539291   47309 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505351 seconds
	I0626 20:52:05.539449   47309 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:05.564127   47309 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:06.097928   47309 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:06.098155   47309 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-934450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:06.617147   47309 kubeadm.go:322] [bootstrap-token] Using token: 7fs1fc.9teiyerfkduv7ctw
	I0626 20:52:03.989716   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.489773   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.618462   47309 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:06.618602   47309 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:06.631936   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:06.655354   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:06.662468   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:06.673817   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:06.680979   47309 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:06.717394   47309 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:07.015067   47309 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:07.079315   47309 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:07.079362   47309 kubeadm.go:322] 
	I0626 20:52:07.079450   47309 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:07.079464   47309 kubeadm.go:322] 
	I0626 20:52:07.079544   47309 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:07.079556   47309 kubeadm.go:322] 
	I0626 20:52:07.079597   47309 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:07.079680   47309 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:07.079765   47309 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:07.079782   47309 kubeadm.go:322] 
	I0626 20:52:07.079867   47309 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:07.079880   47309 kubeadm.go:322] 
	I0626 20:52:07.079960   47309 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:07.079971   47309 kubeadm.go:322] 
	I0626 20:52:07.080038   47309 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:07.080123   47309 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:07.080233   47309 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:07.080249   47309 kubeadm.go:322] 
	I0626 20:52:07.080370   47309 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:07.080467   47309 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:07.080481   47309 kubeadm.go:322] 
	I0626 20:52:07.080574   47309 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.080692   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:07.080738   47309 kubeadm.go:322] 	--control-plane 
	I0626 20:52:07.080756   47309 kubeadm.go:322] 
	I0626 20:52:07.080858   47309 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:07.080870   47309 kubeadm.go:322] 
	I0626 20:52:07.080979   47309 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.081124   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:07.082329   47309 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:07.082353   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:52:07.082369   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:07.084307   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:07.804074   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (36.762635025s)
	I0626 20:52:07.804158   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:07.819772   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:07.830166   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:07.839585   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:07.839633   47605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:08.061341   47605 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:07.085644   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:07.113105   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
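
The scp step above writes the bridge CNI config chosen at cni.go:152 to /etc/cni/net.d/1-k8s.conflist. The exact 457-byte payload is not shown in the log; the sketch below writes a conflist of the typical bridge-plus-portmap shape, with all plugin fields and the subnet being assumptions rather than minikube's actual file. It must run as root to write under /etc/cni/net.d.

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        // Illustrative bridge CNI .conflist; field values are assumptions.
        conflist := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":             "bridge",
                    "bridge":           "bridge",
                    "isDefaultGateway": true,
                    "ipMasq":           true,
                    "hairpinMode":      true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {
                    "type":         "portmap",
                    "capabilities": map[string]bool{"portMappings": true},
                },
            },
        }
        data, err := json.MarshalIndent(conflist, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
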
	I0626 20:52:07.158420   47309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:07.158542   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.158590   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=no-preload-934450 minikube.k8s.io/updated_at=2023_06_26T20_52_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.637925   47309 ops.go:34] apiserver oom_adj: -16
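
The oom_adj: -16 recorded above comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe a few lines earlier, confirming the apiserver is shielded from the OOM killer. A sketch that finds the apiserver by scanning /proc/<pid>/comm instead of shelling out to pgrep; purely illustrative, not how ops.go does it:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Each /proc/<pid>/comm holds the process name (truncated to 15 chars).
        procs, err := filepath.Glob("/proc/[0-9]*/comm")
        if err != nil {
            panic(err)
        }
        for _, comm := range procs {
            name, err := os.ReadFile(comm)
            if err != nil || strings.TrimSpace(string(name)) != "kube-apiserver" {
                continue
            }
            adj, err := os.ReadFile(filepath.Dir(comm) + "/oom_adj")
            if err != nil {
                continue
            }
            // Expect -16 on this control plane, matching the log line above.
            fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
            return
        }
        fmt.Fprintln(os.Stderr, "kube-apiserver not found")
    }
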
	I0626 20:52:07.638078   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.262589   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.762326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.262326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.762334   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.262485   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.762376   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:11.262645   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.490810   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:10.990521   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:11.762599   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.262690   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.762512   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.262844   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.762234   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.262587   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.762670   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.262293   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.763106   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:16.263264   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.991151   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:15.489549   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:19.659464   47605 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:19.659534   47605 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:19.659620   47605 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:19.659793   47605 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:19.659913   47605 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:52:19.659993   47605 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:19.661681   47605 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:19.661770   47605 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:19.661860   47605 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:19.661969   47605 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:19.662065   47605 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:19.662158   47605 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:19.662226   47605 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:19.662321   47605 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:19.662401   47605 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:19.662487   47605 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:19.662595   47605 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:19.662649   47605 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:19.662717   47605 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:19.662779   47605 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:19.662849   47605 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:19.662928   47605 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:19.663014   47605 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:19.663128   47605 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:19.663231   47605 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:19.663286   47605 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:19.663370   47605 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:19.664951   47605 out.go:204]   - Booting up control plane ...
	I0626 20:52:19.665063   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:19.665157   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:19.665246   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:19.665347   47605 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:19.665554   47605 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:19.665662   47605 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504998 seconds
	I0626 20:52:19.665792   47605 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:19.665948   47605 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:19.666027   47605 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:19.666278   47605 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-299839 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:19.666360   47605 kubeadm.go:322] [bootstrap-token] Using token: e53kqf.6hnw5p7blg3e1mpb
	I0626 20:52:19.667988   47605 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:19.668104   47605 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:19.668203   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:19.668357   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:19.668500   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:19.668632   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:19.668732   47605 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:19.668890   47605 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:19.668953   47605 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:19.669024   47605 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:19.669042   47605 kubeadm.go:322] 
	I0626 20:52:19.669122   47605 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:19.669135   47605 kubeadm.go:322] 
	I0626 20:52:19.669243   47605 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:19.669253   47605 kubeadm.go:322] 
	I0626 20:52:19.669284   47605 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:19.669392   47605 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:19.669472   47605 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:19.669483   47605 kubeadm.go:322] 
	I0626 20:52:19.669561   47605 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:19.669571   47605 kubeadm.go:322] 
	I0626 20:52:19.669642   47605 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:19.669661   47605 kubeadm.go:322] 
	I0626 20:52:19.669724   47605 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:19.669831   47605 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:19.669941   47605 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:19.669951   47605 kubeadm.go:322] 
	I0626 20:52:19.670055   47605 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:19.670169   47605 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:19.670179   47605 kubeadm.go:322] 
	I0626 20:52:19.670283   47605 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670428   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:19.670469   47605 kubeadm.go:322] 	--control-plane 
	I0626 20:52:19.670484   47605 kubeadm.go:322] 
	I0626 20:52:19.670588   47605 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:19.670603   47605 kubeadm.go:322] 
	I0626 20:52:19.670715   47605 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670850   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:19.670863   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:52:19.670875   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:19.672750   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:16.762961   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.263008   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.762325   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.262618   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.762659   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.262343   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.763023   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.932557   47309 kubeadm.go:1081] duration metric: took 12.774065652s to wait for elevateKubeSystemPrivileges.
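
elevateKubeSystemPrivileges is the source of the repeated `kubectl get sa default` runs above: minikube retries every 500ms until the default service account exists, which took 12.77s here. A sketch of that retry loop using os/exec; the plain `kubectl` invocation, dropped `sudo`, and the 2-minute cap are simplifying assumptions, as minikube invokes the versioned binary over SSH with its own timeout.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Poll until `kubectl get sa default` succeeds, i.e. the default
        // service account has been created by the controller manager.
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        fmt.Println("timed out waiting for default service account")
    }
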
	I0626 20:52:19.932647   47309 kubeadm.go:406] StartCluster complete in 5m26.514862376s
	I0626 20:52:19.932687   47309 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:19.932796   47309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:19.935445   47309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:19.935820   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:19.936149   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:19.936267   47309 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:19.936369   47309 addons.go:66] Setting storage-provisioner=true in profile "no-preload-934450"
	I0626 20:52:19.936388   47309 addons.go:228] Setting addon storage-provisioner=true in "no-preload-934450"
	W0626 20:52:19.936396   47309 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:19.936453   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.936890   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.936917   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.936996   47309 addons.go:66] Setting default-storageclass=true in profile "no-preload-934450"
	I0626 20:52:19.937022   47309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-934450"
	I0626 20:52:19.937178   47309 addons.go:66] Setting metrics-server=true in profile "no-preload-934450"
	I0626 20:52:19.937198   47309 addons.go:228] Setting addon metrics-server=true in "no-preload-934450"
	W0626 20:52:19.937206   47309 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:19.937259   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.937461   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937485   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.937664   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937686   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.956754   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0626 20:52:19.956777   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0626 20:52:19.956923   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I0626 20:52:19.957245   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957327   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957473   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957897   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.957918   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958063   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958078   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958217   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958240   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958385   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959001   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.959029   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.959257   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959323   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959523   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.960115   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.960168   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.980739   47309 addons.go:228] Setting addon default-storageclass=true in "no-preload-934450"
	W0626 20:52:19.980887   47309 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:19.980924   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.981308   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.981348   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.982528   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0626 20:52:19.982768   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I0626 20:52:19.983398   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984115   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984291   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.984303   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.984767   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985276   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.985294   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.985346   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.985720   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985919   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.987605   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.989810   47309 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:19.991208   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:19.991229   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:19.991248   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:19.989487   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.997528   47309 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:19.996110   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:19.996135   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999411   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:19.999436   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999495   47309 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:19.999511   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:19.999532   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.002886   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.003159   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.003321   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.004492   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.004806   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
	I0626 20:52:20.004991   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.005018   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.005189   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.005234   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.005402   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.005568   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.005703   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.005881   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.005899   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.006233   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.006590   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:20.006614   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:20.022796   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0626 20:52:20.023252   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.023827   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.023852   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.024209   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.024425   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:20.026279   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:20.026527   47309 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.026542   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:20.026559   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.029302   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029775   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.029804   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029944   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.030138   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.030321   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.030454   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.331846   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.341298   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:20.352664   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:20.352693   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:20.376961   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
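
The one-liner at 20:52:20.376961 pipes the live coredns ConfigMap through sed to splice a hosts block (mapping host.minikube.internal to the host-side gateway, 192.168.50.1 here) in front of the forward plugin, then writes it back with kubectl replace. A minimal Go sketch of the same splice, assuming a Corefile shaped like the stock one; injectHostRecord and the sample Corefile are illustrative, not minikube code:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts the hosts block before the "forward" plugin line,
// mirroring the sed -e '/^        forward .../i ...' expression in the log.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(line, "        forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(sample, "192.168.50.1"))
}
```
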
	I0626 20:52:20.420573   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:20.420599   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:20.495388   47309 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-934450" context rescaled to 1 replicas
	I0626 20:52:20.495436   47309 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:20.497711   47309 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:20.499512   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:20.560528   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:20.560559   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:20.647734   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
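
Each addon above follows the same two-step pattern: manifest bytes are staged into /etc/kubernetes/addons/<name>.yaml (the "scp memory -->" lines), then one sudo KUBECONFIG=... kubectl apply covers every staged file. A hedged local sketch of that pattern; the helper layout and the dummy Service manifest are illustrative stand-ins, not minikube's actual addon assets:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

// A dummy stand-in for the 446-byte metrics-server-service.yaml in the log.
const serviceYAML = `apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 4443
  selector:
    k8s-app: metrics-server
`

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.27.3/kubectl"
	dir := "/etc/kubernetes/addons"
	files := map[string]string{"metrics-server-service.yaml": serviceYAML}

	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for name, body := range files {
		// Mirror "scp memory --> <file>": write manifest bytes into the addons dir.
		p := filepath.Join(dir, name)
		if err := os.WriteFile(p, []byte(body), 0o644); err != nil {
			log.Fatal(err)
		}
		args = append(args, "-f", p)
	}
	// One kubectl apply over every staged manifest, as in the log line above.
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
}
```
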
	I0626 20:52:21.308936   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.802312904s)
	I0626 20:52:21.309013   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:21.323340   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:21.333741   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:21.346686   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:21.346741   47779 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:21.427299   47779 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:21.427431   47779 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:21.598474   47779 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:21.598609   47779 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:21.598727   47779 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:52:21.802443   47779 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:17.989506   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:20.002885   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:21.804179   47779 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:21.804277   47779 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:21.804985   47779 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:21.805576   47779 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:21.806465   47779 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:21.807206   47779 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:21.807988   47779 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:21.808775   47779 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:21.809427   47779 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:21.810136   47779 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:21.810809   47779 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:21.811489   47779 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:21.811563   47779 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:22.127084   47779 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:22.371731   47779 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:22.635165   47779 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:22.843347   47779 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:22.866673   47779 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:22.868080   47779 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:22.868259   47779 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:23.015798   47779 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:22.468922   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.137025983s)
	I0626 20:52:22.468974   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.468988   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469285   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469339   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469359   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469390   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469315   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:22.469630   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469649   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469669   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469678   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469900   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469915   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597030   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.255690675s)
	I0626 20:52:23.597078   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.220078989s)
	I0626 20:52:23.597104   47309 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:23.597084   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597131   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597130   47309 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.097584802s)
	I0626 20:52:23.597162   47309 node_ready.go:35] waiting up to 6m0s for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.597463   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597463   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597489   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597499   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597508   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597879   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597931   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597950   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.632416   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.984627683s)
	I0626 20:52:23.632472   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632485   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.632907   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.632919   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.632940   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.632967   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632982   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.633279   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.633297   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.633307   47309 addons.go:464] Verifying addon metrics-server=true in "no-preload-934450"
	I0626 20:52:23.633353   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.635198   47309 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:19.674407   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:19.702224   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
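
The 457-byte conflist pushed to /etc/cni/net.d/1-k8s.conflist is not shown in the log; this sketch writes a typical bridge-plus-portmap chain of the kind the "Configuring bridge CNI" step produces. The exact fields and the 10.244.0.0/16 subnet are assumptions, not the file minikube actually copied:

```go
package main

import (
	"log"
	"os"
)

// bridgeConflist is an assumed, representative bridge CNI config chain.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "forceAddress": false,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Write the conflist where the kubelet's CNI plugin discovery looks for it.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```
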
	I0626 20:52:19.744577   47605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=embed-certs-299839 minikube.k8s.io/updated_at=2023_06_26T20_52_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.783628   47605 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:20.149671   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:20.782659   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.283295   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.782574   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.283137   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.782766   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.282641   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.783459   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.017432   47779 out.go:204]   - Booting up control plane ...
	I0626 20:52:23.017573   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:23.019187   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:23.020097   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:23.023559   47779 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:23.025808   47779 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:23.636740   47309 addons.go:499] enable addons completed in 3.700468963s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0626 20:52:23.637657   47309 node_ready.go:49] node "no-preload-934450" has status "Ready":"True"
	I0626 20:52:23.637673   47309 node_ready.go:38] duration metric: took 40.495678ms waiting for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.637684   47309 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:23.676466   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
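
The node_ready.go and pod_ready.go lines above poll the API server until the node reports a Ready condition and the system-critical pods do the same. A minimal client-go sketch of the node half, assuming the kubeconfig path from the log; this is not minikube's actual helper:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 2s for up to 6m, matching the "waiting up to 6m0s" budget above.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-934450", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "keep polling"
		}
		return nodeReady(n), nil
	})
	fmt.Println("node ready:", err == nil)
}
```
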
	I0626 20:52:25.699614   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:22.489080   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:24.490209   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:24.282506   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:24.782560   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.282565   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.783022   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.282856   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.783243   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.282657   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.783258   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.282802   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.783019   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.283285   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.782968   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.282489   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.782763   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.283126   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.445729   47605 kubeadm.go:1081] duration metric: took 11.701128618s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:31.445766   47605 kubeadm.go:406] StartCluster complete in 5m31.748710798s
	I0626 20:52:31.445787   47605 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.445873   47605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:31.448427   47605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.448700   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:31.448792   47605 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:31.448866   47605 addons.go:66] Setting storage-provisioner=true in profile "embed-certs-299839"
	I0626 20:52:31.448871   47605 addons.go:66] Setting default-storageclass=true in profile "embed-certs-299839"
	I0626 20:52:31.448884   47605 addons.go:228] Setting addon storage-provisioner=true in "embed-certs-299839"
	I0626 20:52:31.448885   47605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-299839"
	W0626 20:52:31.448892   47605 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:31.448938   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:31.448948   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.448986   47605 addons.go:66] Setting metrics-server=true in profile "embed-certs-299839"
	I0626 20:52:31.449006   47605 addons.go:228] Setting addon metrics-server=true in "embed-certs-299839"
	W0626 20:52:31.449013   47605 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:31.449053   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449762   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450455   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450635   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.450708   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.468787   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0626 20:52:31.469015   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0626 20:52:31.469401   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469497   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469929   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.469947   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470036   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.470073   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470548   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470605   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470723   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39029
	I0626 20:52:31.470915   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.471202   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.471236   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.471374   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.471846   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.471871   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.481862   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.482471   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.482499   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.492391   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0626 20:52:31.493190   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.493807   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.493833   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.494190   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.494347   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.496376   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.499801   47605 addons.go:228] Setting addon default-storageclass=true in "embed-certs-299839"
	W0626 20:52:31.499822   47605 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:31.499851   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.500224   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.500253   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.506027   47605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:31.507267   47605 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.507286   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:31.507306   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.507954   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0626 20:52:31.508919   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.509350   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.509364   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.509784   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.510070   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.511452   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.513168   47605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:28.196489   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:30.196782   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:26.989644   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:29.488966   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.506536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.511805   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.512430   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.514510   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.514522   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:31.514530   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.514536   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:31.514555   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.514709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.514860   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.515029   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.517249   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517628   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.517653   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517774   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.517948   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.518282   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.518454   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.522114   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0626 20:52:31.522433   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.522982   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.523010   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.523416   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.523984   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.524019   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.545037   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0626 20:52:31.545523   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.546109   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.546140   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.546551   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.546826   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.549289   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.549597   47605 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.549618   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:31.549638   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.553457   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553713   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.553744   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553798   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.553995   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.554131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.554284   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.693230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:31.713818   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.718654   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:31.718682   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:31.734681   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.767394   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:31.767424   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:31.884424   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:31.884443   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:31.961893   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:32.055887   47605 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-299839" context rescaled to 1 replicas
	I0626 20:52:32.055933   47605 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:32.058697   47605 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:32.530480   47779 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.504525 seconds
	I0626 20:52:32.530633   47779 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:32.556112   47779 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:33.096104   47779 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:33.096372   47779 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-473235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:33.615425   47779 kubeadm.go:322] [bootstrap-token] Using token: fvy9dh.hbeabw0ufqdnf2rd
	I0626 20:52:33.617480   47779 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:33.617622   47779 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:33.630158   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:33.641973   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:33.649480   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:33.657736   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:33.663093   47779 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:33.698108   47779 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:34.017843   47779 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:34.069498   47779 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:34.070500   47779 kubeadm.go:322] 
	I0626 20:52:34.070587   47779 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:34.070600   47779 kubeadm.go:322] 
	I0626 20:52:34.070691   47779 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:34.070705   47779 kubeadm.go:322] 
	I0626 20:52:34.070734   47779 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:34.070809   47779 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:34.070915   47779 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:34.070952   47779 kubeadm.go:322] 
	I0626 20:52:34.071047   47779 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:34.071060   47779 kubeadm.go:322] 
	I0626 20:52:34.071114   47779 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:34.071124   47779 kubeadm.go:322] 
	I0626 20:52:34.071183   47779 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:34.071276   47779 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:34.071360   47779 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:34.071369   47779 kubeadm.go:322] 
	I0626 20:52:34.071454   47779 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:34.071550   47779 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:34.071558   47779 kubeadm.go:322] 
	I0626 20:52:34.071677   47779 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.071823   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:34.071852   47779 kubeadm.go:322] 	--control-plane 
	I0626 20:52:34.071860   47779 kubeadm.go:322] 
	I0626 20:52:34.071961   47779 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:34.071973   47779 kubeadm.go:322] 
	I0626 20:52:34.072075   47779 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.072202   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:34.072734   47779 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:34.072775   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:52:34.072794   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:34.074659   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:32.060653   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:33.969636   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.276366101s)
	I0626 20:52:33.969679   47605 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:34.114443   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.400580422s)
	I0626 20:52:34.114587   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114636   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114483   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.379765696s)
	I0626 20:52:34.114695   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114993   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115036   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115049   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.115059   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.115068   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.115386   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115394   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115458   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117682   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.117720   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.117736   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117754   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.117764   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.119184   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.119204   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.119218   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.119238   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.119253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.120750   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.120787   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.120800   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.800635   47605 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.739945617s)
	I0626 20:52:34.800672   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838732117s)
	I0626 20:52:34.800721   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.800740   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.800674   47605 node_ready.go:35] waiting up to 6m0s for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.801059   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.801086   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.801103   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.801112   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.802733   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.802767   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.802781   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.802798   47605 addons.go:464] Verifying addon metrics-server=true in "embed-certs-299839"
	I0626 20:52:34.804616   47605 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:52:34.076233   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:34.097578   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:52:34.126294   47779 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:34.126351   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.126361   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=default-k8s-diff-port-473235 minikube.k8s.io/updated_at=2023_06_26T20_52_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.672738   47779 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:34.672886   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:32.196979   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.198202   47309 pod_ready.go:97] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198243   47309 pod_ready.go:81] duration metric: took 10.521748073s waiting for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:34.198256   47309 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198265   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208718   47309 pod_ready.go:92] pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.208751   47309 pod_ready.go:81] duration metric: took 10.474456ms waiting for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208765   47309 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216757   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.216787   47309 pod_ready.go:81] duration metric: took 8.014039ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216800   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226840   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.226862   47309 pod_ready.go:81] duration metric: took 10.054474ms waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226875   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234229   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.234252   47309 pod_ready.go:81] duration metric: took 7.369366ms waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234265   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603958   47309 pod_ready.go:92] pod "kube-proxy-jhz99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.603985   47309 pod_ready.go:81] duration metric: took 369.712585ms waiting for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603999   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.992990   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.993018   47309 pod_ready.go:81] duration metric: took 389.011206ms waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.993033   47309 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:33.991358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:36.489561   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.806005   47605 addons.go:499] enable addons completed in 3.357208024s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:52:34.826098   47605 node_ready.go:49] node "embed-certs-299839" has status "Ready":"True"
	I0626 20:52:34.826123   47605 node_ready.go:38] duration metric: took 25.328707ms waiting for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.826131   47605 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:34.853293   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388894   47605 pod_ready.go:92] pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.388921   47605 pod_ready.go:81] duration metric: took 1.535604079s waiting for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388931   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397936   47605 pod_ready.go:92] pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.397962   47605 pod_ready.go:81] duration metric: took 9.024703ms waiting for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397978   47605 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409066   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.409098   47605 pod_ready.go:81] duration metric: took 11.112746ms waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409111   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419292   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.419313   47605 pod_ready.go:81] duration metric: took 10.193966ms waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419322   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429116   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.429140   47605 pod_ready.go:81] duration metric: took 9.812044ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429154   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316268   47605 pod_ready.go:92] pod "kube-proxy-scfwr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.316318   47605 pod_ready.go:81] duration metric: took 887.155494ms waiting for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316334   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605351   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.605394   47605 pod_ready.go:81] duration metric: took 289.052198ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605409   47605 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:35.287764   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:35.787902   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.287089   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.786922   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.287932   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.787255   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.287820   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.786891   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.287467   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.787282   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.400022   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:39.401566   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:41.404969   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:38.491696   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.990293   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.013927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:42.518436   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.287734   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:40.786949   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.287187   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.787722   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.287098   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.787623   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.287242   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.787224   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.287339   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.787760   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.287273   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.787052   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.287810   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.436665   47779 kubeadm.go:1081] duration metric: took 12.310369141s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:46.436696   47779 kubeadm.go:406] StartCluster complete in 5m23.972219662s
	I0626 20:52:46.436715   47779 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.436798   47779 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:46.438623   47779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.438897   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:46.439016   47779 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:46.439110   47779 addons.go:66] Setting storage-provisioner=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439117   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:46.439128   47779 addons.go:66] Setting default-storageclass=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439166   47779 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-473235"
	I0626 20:52:46.439128   47779 addons.go:228] Setting addon storage-provisioner=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439240   47779 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:46.439285   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439133   47779 addons.go:66] Setting metrics-server=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439336   47779 addons.go:228] Setting addon metrics-server=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439346   47779 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:46.439383   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439663   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439691   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439694   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439717   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439733   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439754   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.456038   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0626 20:52:46.456227   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0626 20:52:46.456533   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.456769   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.457072   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457092   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457258   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457280   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457413   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457749   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457902   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.459751   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0626 20:52:46.460296   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.460326   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.460688   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.462951   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.462975   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.463384   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.463981   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.464006   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.477368   47779 addons.go:228] Setting addon default-storageclass=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.477472   47779 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:46.477516   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.477987   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.478063   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.479865   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0626 20:52:46.480358   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.480932   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.480951   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.481335   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.482608   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0626 20:52:46.482630   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.482982   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.483505   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.483521   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.483907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.484123   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.485234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.487634   47779 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:46.486430   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.488916   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:46.488938   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:46.488959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.490698   47779 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:43.900514   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.900540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:43.488701   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.992735   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:46.491860   47779 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.491875   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:46.491893   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.492950   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.493834   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.493855   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.494361   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.494827   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.494987   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.495130   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.496109   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.496170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496192   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.496213   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496294   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.496444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.496549   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.502119   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
	I0626 20:52:46.502456   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.502898   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.502916   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.503203   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.503723   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.503747   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.522597   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0626 20:52:46.523240   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.523892   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.523912   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.524423   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.524674   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.526567   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.528682   47779 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.528699   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:46.528721   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.531983   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532450   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.532477   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532785   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.533992   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.534158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.534302   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.698636   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:46.819666   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.915074   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.918133   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:46.918161   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:47.006856   47779 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-473235" context rescaled to 1 replicas
	I0626 20:52:47.006907   47779 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:47.008746   47779 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:45.013051   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.014722   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.010273   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:47.015003   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:47.015022   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:47.099554   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:47.099583   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:47.162192   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:48.848078   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.149396252s)
	I0626 20:52:48.848110   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.028412306s)
	I0626 20:52:48.848145   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848157   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848112   47779 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:48.848418   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848438   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848440   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848448   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848460   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848678   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848699   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848712   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848715   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848722   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848936   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848959   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.142482   47779 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.13217662s)
	I0626 20:52:49.142522   47779 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.142664   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.227563186s)
	I0626 20:52:49.142706   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.142723   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143018   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143037   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143047   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.143055   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.143309   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143402   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143378   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.230635   47779 node_ready.go:49] node "default-k8s-diff-port-473235" has status "Ready":"True"
	I0626 20:52:49.230663   47779 node_ready.go:38] duration metric: took 88.12938ms waiting for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.230688   47779 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:49.248094   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:49.857182   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694948259s)
	I0626 20:52:49.857243   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857254   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857552   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857569   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857579   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857588   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857816   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857836   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857847   47779 addons.go:464] Verifying addon metrics-server=true in "default-k8s-diff-port-473235"
	I0626 20:52:49.859648   47779 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:49.860902   47779 addons.go:499] enable addons completed in 3.421885216s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0626 20:52:47.901422   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.402347   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:48.490248   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.991228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.082154   46683 pod_ready.go:81] duration metric: took 4m0.000473504s waiting for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:51.082180   46683 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:52:51.082198   46683 pod_ready.go:38] duration metric: took 4m1.199581008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:51.082227   46683 kubeadm.go:640] restartCluster took 5m4.421255564s
	W0626 20:52:51.082286   46683 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:52:51.082319   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:52:50.897742   47779 pod_ready.go:92] pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.897765   47779 pod_ready.go:81] duration metric: took 1.649649958s waiting for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.897777   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.924988   47779 pod_ready.go:92] pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.925007   47779 pod_ready.go:81] duration metric: took 27.222965ms waiting for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.925016   47779 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942760   47779 pod_ready.go:92] pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.942781   47779 pod_ready.go:81] duration metric: took 17.75819ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942790   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956204   47779 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.956224   47779 pod_ready.go:81] duration metric: took 13.428405ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956235   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964542   47779 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.964569   47779 pod_ready.go:81] duration metric: took 8.32705ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964581   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791355   47779 pod_ready.go:92] pod "kube-proxy-k4hzc" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:51.791376   47779 pod_ready.go:81] duration metric: took 826.787812ms waiting for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791384   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078670   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:52.078700   47779 pod_ready.go:81] duration metric: took 287.306474ms waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078714   47779 pod_ready.go:38] duration metric: took 2.848014299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:52.078733   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:52:52.078789   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:52:52.094414   47779 api_server.go:72] duration metric: took 5.08747775s to wait for apiserver process to appear ...
	I0626 20:52:52.094444   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:52:52.094468   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:52:52.101300   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:52:52.102682   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:52:52.102703   47779 api_server.go:131] duration metric: took 8.250707ms to wait for apiserver health ...
	I0626 20:52:52.102712   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:52:52.283428   47779 system_pods.go:59] 9 kube-system pods found
	I0626 20:52:52.283459   47779 system_pods.go:61] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.283467   47779 system_pods.go:61] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.283474   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.283482   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.283488   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.283493   47779 system_pods.go:61] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.283500   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.283511   47779 system_pods.go:61] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.283519   47779 system_pods.go:61] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.283527   47779 system_pods.go:74] duration metric: took 180.810034ms to wait for pod list to return data ...
	I0626 20:52:52.283540   47779 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:52:52.478374   47779 default_sa.go:45] found service account: "default"
	I0626 20:52:52.478400   47779 default_sa.go:55] duration metric: took 194.853163ms for default service account to be created ...
	I0626 20:52:52.478418   47779 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:52:52.683697   47779 system_pods.go:86] 9 kube-system pods found
	I0626 20:52:52.683724   47779 system_pods.go:89] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.683730   47779 system_pods.go:89] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.683735   47779 system_pods.go:89] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.683740   47779 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.683745   47779 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.683748   47779 system_pods.go:89] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.683752   47779 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.683761   47779 system_pods.go:89] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.683773   47779 system_pods.go:89] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.683789   47779 system_pods.go:126] duration metric: took 205.364587ms to wait for k8s-apps to be running ...
	I0626 20:52:52.683798   47779 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:52:52.683846   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:52.698439   47779 system_svc.go:56] duration metric: took 14.634482ms WaitForService to wait for kubelet.
	I0626 20:52:52.698463   47779 kubeadm.go:581] duration metric: took 5.691531199s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:52:52.698480   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:52:52.879414   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:52:52.879441   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:52:52.879454   47779 node_conditions.go:105] duration metric: took 180.969761ms to run NodePressure ...
	I0626 20:52:52.879466   47779 start.go:228] waiting for startup goroutines ...
	I0626 20:52:52.879473   47779 start.go:233] waiting for cluster config update ...
	I0626 20:52:52.879484   47779 start.go:242] writing updated cluster config ...
	I0626 20:52:52.879748   47779 ssh_runner.go:195] Run: rm -f paused
	I0626 20:52:52.928982   47779 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:52:52.930701   47779 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-473235" cluster and "default" namespace by default
	I0626 20:52:49.513843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.515851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:54.013443   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:52.901965   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:55.400541   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:56.014186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:58.516445   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:57.900857   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:59.901944   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:01.013089   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:03.015510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:02.400534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:04.400691   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:06.401897   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:05.513529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.013510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.901751   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:11.400891   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:10.513562   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:12.515529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:13.900503   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:15.900570   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:14.208647   46683 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (23.126299276s)
	I0626 20:53:14.208727   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:14.222919   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:53:14.234762   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:53:14.244800   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
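The exit-status-2 result above is the expected outcome on a freshly reset node: minikube probes the four kubeconfig files with a single `ls -la`, and any missing file makes `ls` exit non-zero, which minikube reads as "no stale config to clean up" before moving straight on to `kubeadm init`. A minimal sketch of that probe, using plain os/exec rather than minikube's ssh_runner (the file list is taken verbatim from the log line above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	// ls exits non-zero (status 2 here) if any of the files is missing.
    	err := exec.Command("sudo", append([]string{"ls", "-la"}, files...)...).Run()
    	if err != nil {
    		fmt.Println("config check failed, skipping stale config cleanup:", err)
    		return
    	}
    	fmt.Println("stale kubeconfigs present; cleanup needed before init")
    }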
	I0626 20:53:14.244840   46683 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0626 20:53:14.465786   46683 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:53:15.014781   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.017400   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.901367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:20.401697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:19.515459   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.015763   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.900407   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:24.901270   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.255771   46683 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0626 20:53:27.255867   46683 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:53:27.255968   46683 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:53:27.256115   46683 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:53:27.256237   46683 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 20:53:27.256368   46683 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:53:27.256489   46683 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:53:27.256550   46683 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0626 20:53:27.256604   46683 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:53:27.258050   46683 out.go:204]   - Generating certificates and keys ...
	I0626 20:53:27.258140   46683 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:53:27.258235   46683 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:53:27.258357   46683 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:53:27.258441   46683 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:53:27.258554   46683 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:53:27.258611   46683 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:53:27.258665   46683 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:53:27.258737   46683 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:53:27.258832   46683 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:53:27.258907   46683 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:53:27.258954   46683 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:53:27.259034   46683 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:53:27.259106   46683 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:53:27.259170   46683 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:53:27.259247   46683 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:53:27.259325   46683 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:53:27.259410   46683 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:53:27.260969   46683 out.go:204]   - Booting up control plane ...
	I0626 20:53:27.261074   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:53:27.261181   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:53:27.261257   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:53:27.261341   46683 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:53:27.261496   46683 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:53:27.261599   46683 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003012 seconds
	I0626 20:53:27.261709   46683 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:53:27.261854   46683 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:53:27.261940   46683 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:53:27.262112   46683 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-490377 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0626 20:53:27.262210   46683 kubeadm.go:322] [bootstrap-token] Using token: 9pdj92.0ssfpvr0ns0ww3t3
	I0626 20:53:27.263670   46683 out.go:204]   - Configuring RBAC rules ...
	I0626 20:53:27.263769   46683 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:53:27.263903   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:53:27.264029   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:53:27.264172   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:53:27.264278   46683 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:53:27.264333   46683 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:53:27.264372   46683 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:53:27.264379   46683 kubeadm.go:322] 
	I0626 20:53:27.264445   46683 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:53:27.264454   46683 kubeadm.go:322] 
	I0626 20:53:27.264557   46683 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:53:27.264568   46683 kubeadm.go:322] 
	I0626 20:53:27.264598   46683 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:53:27.264668   46683 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:53:27.264715   46683 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:53:27.264721   46683 kubeadm.go:322] 
	I0626 20:53:27.264769   46683 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:53:27.264846   46683 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:53:27.264934   46683 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:53:27.264943   46683 kubeadm.go:322] 
	I0626 20:53:27.265038   46683 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0626 20:53:27.265101   46683 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:53:27.265107   46683 kubeadm.go:322] 
	I0626 20:53:27.265171   46683 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265269   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:53:27.265292   46683 kubeadm.go:322]     --control-plane 	  
	I0626 20:53:27.265298   46683 kubeadm.go:322] 
	I0626 20:53:27.265439   46683 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:53:27.265451   46683 kubeadm.go:322] 
	I0626 20:53:27.265581   46683 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265739   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:53:27.265753   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:53:27.265765   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:53:27.267293   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:53:24.515093   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.014403   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.401630   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:29.404203   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.268439   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:53:27.281135   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
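The 457-byte file copied above is the bridge configuration behind the "Configuring bridge CNI" step. Its exact contents are not shown in the log; a representative conflist of the kind minikube writes to /etc/cni/net.d for the bridge plugin looks roughly like this (all names, CIDRs, and values illustrative, not the actual generated file):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }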
	I0626 20:53:27.304145   46683 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:53:27.304275   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=old-k8s-version-490377 minikube.k8s.io/updated_at=2023_06_26T20_53_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.304277   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.555789   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.571040   46683 ops.go:34] apiserver oom_adj: -16
	I0626 20:53:28.180843   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:28.681089   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.180441   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.680355   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.180860   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.680971   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.181088   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.680352   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.516069   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.013135   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.013391   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:31.901777   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.400314   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:36.400967   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.180338   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:32.680389   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.180568   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.681010   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.180575   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.680905   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.180640   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.680412   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.181081   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.680836   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.514263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:39.013193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:38.900309   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:40.900622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:37.181178   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:37.680710   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.180280   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.680304   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.681177   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.180431   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.681031   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.180847   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.681058   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.680883   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.800538   46683 kubeadm.go:1081] duration metric: took 15.496322508s to wait for elevateKubeSystemPrivileges.
	I0626 20:53:42.800568   46683 kubeadm.go:406] StartCluster complete in 5m56.189450192s
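The burst of `kubectl get sa default` runs above is minikube waiting for the `default` service account to exist before it applies elevated RBAC for kube-system (the minikube-rbac clusterrolebinding created at 20:53:27.304): the log shows one attempt roughly every 500ms for about 15.5s until the duration-metric line reports success. A sketch of that wait loop under the same half-second cadence (minikube's actual helper is elevateKubeSystemPrivileges in kubeadm.go; the timeout here is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(3 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if cmd.Run() == nil {
    			fmt.Println("default service account exists; safe to apply RBAC")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	fmt.Println("timed out waiting for default service account")
    }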
	I0626 20:53:42.800584   46683 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.800661   46683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:53:42.802530   46683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.802755   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:53:42.802810   46683 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:53:42.802908   46683 addons.go:66] Setting storage-provisioner=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802926   46683 addons.go:228] Setting addon storage-provisioner=true in "old-k8s-version-490377"
	W0626 20:53:42.802936   46683 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:53:42.802934   46683 addons.go:66] Setting default-storageclass=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802953   46683 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-490377"
	I0626 20:53:42.802972   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:53:42.802983   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.802974   46683 addons.go:66] Setting metrics-server=true in profile "old-k8s-version-490377"
	I0626 20:53:42.803034   46683 addons.go:228] Setting addon metrics-server=true in "old-k8s-version-490377"
	W0626 20:53:42.803048   46683 addons.go:237] addon metrics-server should already be in state true
	I0626 20:53:42.803155   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.803353   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803394   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803437   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803468   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803563   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803607   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.822676   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0626 20:53:42.822891   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0626 20:53:42.823127   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823221   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0626 20:53:42.823284   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823599   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823763   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823771   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823783   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.823790   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824056   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.824082   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824096   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824141   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824310   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.824408   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824656   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824682   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.824924   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824954   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.839635   46683 addons.go:228] Setting addon default-storageclass=true in "old-k8s-version-490377"
	W0626 20:53:42.839655   46683 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:53:42.839675   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.840131   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.840171   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.846479   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0626 20:53:42.847180   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.847711   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.847728   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.848194   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.848454   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.848519   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I0626 20:53:42.850321   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.850427   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.852331   46683 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:53:42.851252   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.853522   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.853581   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:53:42.853603   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:53:42.853625   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.854082   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.854292   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.856641   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.858158   46683 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:53:42.857809   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.859467   46683 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:42.859485   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:53:42.859500   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.859505   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.859528   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.858223   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.858466   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0626 20:53:42.860179   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.860331   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.860421   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.860783   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.860909   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.860923   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.861642   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.862199   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.862246   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.863700   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864103   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.864124   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864413   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.864598   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.864737   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.864867   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.878470   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0626 20:53:42.878961   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.879500   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.879510   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.879860   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.880063   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.881757   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.882028   46683 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:42.882040   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:53:42.882054   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.887689   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.887749   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.887779   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887888   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.888058   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.888203   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.981495   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:53:43.064530   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:53:43.064554   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:53:43.074105   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:43.091876   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:43.132074   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:53:43.132095   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:53:43.219103   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:53:43.219133   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:53:43.285081   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:53:43.443796   46683 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-490377" context rescaled to 1 replicas
	I0626 20:53:43.443841   46683 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:53:43.445639   46683 out.go:177] * Verifying Kubernetes components...
	I0626 20:53:41.014279   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.515278   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.447458   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:43.642242   46683 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
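The "host record injected" line above is the result of the `sed` pipeline run at 20:53:42.981495: it rewrites the coredns ConfigMap in place so the Corefile resolves `host.minikube.internal` to the host-side gateway IP, and also inserts a `log` directive before `errors`. Reconstructed from that sed expression, the edited Corefile fragment looks like this (surrounding standard plugins abbreviated):

    .:53 {
        log
        errors
        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }

The `fallthrough` keyword matters here: any name the hosts block does not match is passed on to the `forward` plugin, so normal DNS resolution is unaffected.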
	I0626 20:53:44.194901   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.102988033s)
	I0626 20:53:44.194990   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195008   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.194932   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120793889s)
	I0626 20:53:44.195085   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195096   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195452   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195466   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195475   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195486   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195493   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195518   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195529   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195714   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195765   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195774   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195816   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195893   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195905   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195922   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195936   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.196171   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.196190   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.196197   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.260680   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.260703   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.260706   46683 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.261103   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261122   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261134   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.261144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.261146   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.261364   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261386   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261396   46683 addons.go:464] Verifying addon metrics-server=true in "old-k8s-version-490377"
	I0626 20:53:44.262936   46683 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:53:42.901604   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.902182   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.264049   46683 addons.go:499] enable addons completed in 1.461244367s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:53:44.318103   46683 node_ready.go:49] node "old-k8s-version-490377" has status "Ready":"True"
	I0626 20:53:44.318135   46683 node_ready.go:38] duration metric: took 57.40895ms waiting for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.318147   46683 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:53:44.333409   46683 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:46.345926   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:46.015128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.516066   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:47.400802   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:49.901066   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.347529   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:50.847639   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:51.012404   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.012697   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:52.400326   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:54.400932   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.402434   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.345907   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:55.345824   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.345850   46683 pod_ready.go:81] duration metric: took 11.012408828s waiting for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.345858   46683 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350198   46683 pod_ready.go:92] pod "kube-proxy-m7hz7" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.350214   46683 pod_ready.go:81] duration metric: took 4.351274ms waiting for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350222   46683 pod_ready.go:38] duration metric: took 11.032065043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
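Each pod_ready:102 line in this log is one poll of a pod's PodReady condition; the `"Ready":"True"` lines above mark the polls that finally succeed. A self-contained sketch of that check with client-go (not minikube's actual pod_ready.go; the kubeconfig path is the client-go default and the pod name is copied from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").
    			Get(context.TODO(), "coredns-5644d7b6d9-k6lww", metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // the log above polls on a similar ~2s cadence
    	}
    }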
	I0626 20:53:55.350236   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:53:55.350285   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:53:55.366478   46683 api_server.go:72] duration metric: took 11.922600619s to wait for apiserver process to appear ...
	I0626 20:53:55.366501   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:53:55.366518   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:53:55.373257   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:53:55.374362   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:53:55.374382   46683 api_server.go:131] duration metric: took 7.874169ms to wait for apiserver health ...
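The healthz probe above is a plain HTTPS GET against the apiserver, where a 200 with body `ok` counts as healthy. A minimal sketch of the same request, skipping TLS verification for brevity (an assumption for the demo; minikube instead trusts its cluster CA and presents client certificates):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		// Demo only: do not skip verification outside a throwaway test VM.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.72.111:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver not healthy yet:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }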
	I0626 20:53:55.374390   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:53:55.377704   46683 system_pods.go:59] 4 kube-system pods found
	I0626 20:53:55.377719   46683 system_pods.go:61] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.377724   46683 system_pods.go:61] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.377744   46683 system_pods.go:61] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.377754   46683 system_pods.go:61] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.377759   46683 system_pods.go:74] duration metric: took 3.35753ms to wait for pod list to return data ...
	I0626 20:53:55.377765   46683 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:53:55.379628   46683 default_sa.go:45] found service account: "default"
	I0626 20:53:55.379641   46683 default_sa.go:55] duration metric: took 1.87263ms for default service account to be created ...
	I0626 20:53:55.379647   46683 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:53:55.382155   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.382171   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.382176   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.382183   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.382189   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.382204   46683 retry.go:31] will retry after 310.903974ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:55.698587   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.698613   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.698618   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.698625   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.698631   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.698646   46683 retry.go:31] will retry after 300.100433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.005356   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.005397   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.005408   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.005419   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.005427   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.005446   46683 retry.go:31] will retry after 407.352435ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.417879   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.417905   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.417910   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.417916   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.417922   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.417935   46683 retry.go:31] will retry after 483.508514ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:55.013247   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:57.015631   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:58.900650   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.401491   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.906260   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.906282   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.906287   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.906293   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.906301   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.906319   46683 retry.go:31] will retry after 527.167542ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:57.438949   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:57.438985   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:57.438995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:57.439006   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:57.439019   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:57.439038   46683 retry.go:31] will retry after 902.255612ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:58.346131   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:58.346161   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:58.346166   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:58.346173   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:58.346179   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:58.346192   46683 retry.go:31] will retry after 904.271086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.256458   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:59.256489   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:59.256497   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:59.256509   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:59.256517   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:59.256534   46683 retry.go:31] will retry after 1.069634228s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:00.331828   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:00.331858   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:00.331865   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:00.331873   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:00.331879   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:00.331896   46683 retry.go:31] will retry after 1.418598639s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:01.755104   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:01.755131   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:01.755136   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:01.755143   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:01.755149   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:01.755162   46683 retry.go:31] will retry after 1.624135654s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.514847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.515086   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.900425   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:05.900854   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.385085   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:03.385111   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:03.385116   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:03.385122   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:03.385128   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:03.385142   46683 retry.go:31] will retry after 1.861818901s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:05.251844   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:05.251870   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:05.251875   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:05.251882   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:05.251888   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:05.251901   46683 retry.go:31] will retry after 3.23679019s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:06.013786   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.514493   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.399542   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:10.400928   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.494644   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:08.494669   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:08.494674   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:08.494681   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:08.494687   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:08.494700   46683 retry.go:31] will retry after 4.210335189s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:10.514707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.515079   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.415273   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:14.899807   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.709730   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:12.709754   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:12.709759   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:12.709765   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:12.709771   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:12.709785   46683 retry.go:31] will retry after 4.208864521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:15.012766   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:17.012807   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.014851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.901107   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.400540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:21.402204   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.923625   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:16.923654   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:16.923662   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:16.923673   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:16.923682   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:16.923701   46683 retry.go:31] will retry after 6.417296046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:21.514829   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.515117   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.402546   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:25.903195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.347074   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:23.347099   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:23.347105   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Pending
	I0626 20:54:23.347108   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:23.347115   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:23.347121   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:23.347133   46683 retry.go:31] will retry after 7.108155838s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:26.013263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.013708   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.399697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.401036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.460927   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:30.460950   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:30.460955   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:30.460995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:30.461004   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:30.461014   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:30.461027   46683 retry.go:31] will retry after 9.756193162s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:30.514139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.514334   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:34.901064   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:35.013362   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.013815   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.014126   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.400945   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:40.223985   46683 system_pods.go:86] 7 kube-system pods found
	I0626 20:54:40.224009   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:40.224014   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Pending
	I0626 20:54:40.224018   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Pending
	I0626 20:54:40.224022   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:40.224026   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:40.224032   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:40.224037   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:40.224052   46683 retry.go:31] will retry after 8.963386657s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:41.515388   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:44.015053   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:41.900424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:43.901263   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.400098   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.514128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.013743   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.195390   46683 system_pods.go:86] 8 kube-system pods found
	I0626 20:54:49.195416   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:49.195421   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Running
	I0626 20:54:49.195426   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Running
	I0626 20:54:49.195430   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:49.195434   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:49.195438   46683 system_pods.go:89] "kube-scheduler-old-k8s-version-490377" [c6fe04b8-d037-452b-bf41-3719c032b7ef] Running
	I0626 20:54:49.195444   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:49.195450   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:49.195458   46683 system_pods.go:126] duration metric: took 53.81580645s to wait for k8s-apps to be running ...
	I0626 20:54:49.195466   46683 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:54:49.195518   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:54:49.219014   46683 system_svc.go:56] duration metric: took 23.534309ms WaitForService to wait for kubelet.
	I0626 20:54:49.219049   46683 kubeadm.go:581] duration metric: took 1m5.775176119s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:54:49.219089   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:54:49.223397   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:54:49.223426   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:54:49.223438   46683 node_conditions.go:105] duration metric: took 4.339435ms to run NodePressure ...
	I0626 20:54:49.223452   46683 start.go:228] waiting for startup goroutines ...
	I0626 20:54:49.223461   46683 start.go:233] waiting for cluster config update ...
	I0626 20:54:49.223472   46683 start.go:242] writing updated cluster config ...
	I0626 20:54:49.223798   46683 ssh_runner.go:195] Run: rm -f paused
	I0626 20:54:49.277613   46683 start.go:652] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0626 20:54:49.279501   46683 out.go:177] 
	W0626 20:54:49.280841   46683 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0626 20:54:49.282249   46683 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0626 20:54:49.283695   46683 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-490377" cluster and "default" namespace by default
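
The "will retry after Ns: missing components: ..." lines above come from a jittered backoff loop around the system-pods check. A minimal sketch of that pattern follows; this is an illustration of the retry shape visible in the log, not minikube's actual retry.go, and the check function and interval growth are assumptions.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents polls check() until it reports no missing components or
// the deadline passes, sleeping with a jittered, growing backoff in between —
// the shape of the "will retry after Ns: missing components: ..." log lines.
func waitForComponents(check func() []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := time.Second
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; still missing: %v", missing)
		}
		// Jittered backoff: base interval plus a random fraction, capped so
		// retries stay responsive once control-plane pods begin appearing.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: missing components: %v\n", sleep, missing)
		time.Sleep(sleep)
		if backoff < 10*time.Second {
			backoff = backoff * 3 / 2
		}
	}
}

func main() {
	calls := 0
	// Fake check that "heals" on the third poll, to exercise the loop.
	err := waitForComponents(func() []string {
		calls++
		if calls < 3 {
			return []string{"etcd", "kube-apiserver"}
		}
		return nil
	}, time.Minute)
	fmt.Println("done:", err)
}
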
	I0626 20:54:48.401602   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:50.900375   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:51.514071   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.013330   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:52.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.900946   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.013501   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:58.014748   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.901531   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:59.401822   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:00.016725   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:02.514316   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:01.902698   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:04.400011   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:06.402149   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:05.014536   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:07.514975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:08.900297   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.900463   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.013780   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:12.514823   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:13.399907   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.400044   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.014032   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.515161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.907245   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.400962   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.015074   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.514465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.403366   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.900247   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.514993   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.012592   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.013612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.400192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.401917   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.402240   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.015647   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.513844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.900187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.902063   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.514657   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:37.514888   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:38.400753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.902398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.014755   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:42.514599   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:43.401280   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:45.902265   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:44.521736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.016422   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.902334   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:50.400765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:49.515570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.014736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.900293   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.900572   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.514047   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.013346   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.013409   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.400170   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.401528   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.013946   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:03.014845   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.902597   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:04.401919   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:05.514639   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:08.016797   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:06.901493   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:09.400229   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:11.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:10.513478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:12.514938   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:13.403138   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.901738   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.013852   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:17.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:18.400812   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.401025   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.013522   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.015651   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.016747   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.401212   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.401675   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.515343   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:28.515706   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.902301   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:29.401779   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.012844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:33.013826   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.901622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.403688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.993256   47309 pod_ready.go:81] duration metric: took 4m0.000204736s waiting for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:34.993309   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:34.993324   47309 pod_ready.go:38] duration metric: took 4m11.355630262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
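
The repeated `status "Ready":"False"` lines ending in "context deadline exceeded" are the output of a Ready-condition poll bounded by a 4-minute deadline. Below is a minimal client-go sketch of that kind of wait — not minikube's actual pod_ready.go; the kubeconfig path, poll interval, and reuse of the pod name from the log are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s; give up after 4m, mirroring the deadline in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-74d5c6b9c-vkggw", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod has status \"Ready\":%q\n", c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	// err is context.DeadlineExceeded if the pod never became Ready.
	fmt.Println("wait result:", err)
}
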
	I0626 20:56:34.993352   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:34.993410   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:34.993484   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:35.038316   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.038342   47309 cri.go:89] found id: ""
	I0626 20:56:35.038352   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:35.038414   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.042851   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:35.042914   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:35.076892   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.076925   47309 cri.go:89] found id: ""
	I0626 20:56:35.076934   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:35.076990   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.081850   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:35.081933   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:35.119872   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.119896   47309 cri.go:89] found id: ""
	I0626 20:56:35.119904   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:35.119971   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.124661   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:35.124731   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:35.158899   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.158924   47309 cri.go:89] found id: ""
	I0626 20:56:35.158933   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:35.158991   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.163512   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:35.163587   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:35.195698   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.195721   47309 cri.go:89] found id: ""
	I0626 20:56:35.195729   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:35.195786   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.199883   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:35.199935   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:35.243909   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.243932   47309 cri.go:89] found id: ""
	I0626 20:56:35.243939   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:35.243992   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.248331   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:35.248388   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:35.287985   47309 cri.go:89] found id: ""
	I0626 20:56:35.288009   47309 logs.go:284] 0 containers: []
	W0626 20:56:35.288019   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:35.288026   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:35.288085   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:35.324050   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.324129   47309 cri.go:89] found id: ""
	I0626 20:56:35.324151   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:35.324219   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.328564   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:35.328588   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:35.369968   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:35.369997   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:35.391588   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:35.391615   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:35.542328   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:35.542356   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.579140   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:35.579172   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.635428   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:35.635463   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.674715   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:35.674750   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.732788   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:35.732837   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.774860   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:35.774901   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:35.881082   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:35.881118   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.929445   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:35.929478   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.968723   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:35.968754   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
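
The gather-logs pass above is a two-step crictl pipeline: enumerate container IDs per component with `crictl ps -a --quiet --name=<component>`, then pull each container's last 400 log lines with `crictl logs --tail 400 <id>`. A sketch of that sequence, with the command strings taken verbatim from the log (this is an illustration, not minikube's logs.go, and it assumes crictl is on the PATH with sudo rights):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gather lists all containers (running or exited) for one component and
// prints the tail of each container's logs, as the log-gathering pass does.
func gather(component string) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("logs for %s: %w", id, err)
		}
		fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		if err := gather(c); err != nil {
			fmt.Println("gather", c, "failed:", err)
		}
	}
}
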
	I0626 20:56:35.015798   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.514548   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.606375   47605 pod_ready.go:81] duration metric: took 4m0.000950536s waiting for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:37.606403   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:37.606412   47605 pod_ready.go:38] duration metric: took 4m2.78027212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:56:37.606429   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:37.606459   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:37.606521   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:37.668350   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:37.668383   47605 cri.go:89] found id: ""
	I0626 20:56:37.668391   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:37.668453   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.675583   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:37.675669   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:37.710826   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:37.710852   47605 cri.go:89] found id: ""
	I0626 20:56:37.710860   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:37.710916   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.715610   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:37.715671   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:37.751709   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:37.751784   47605 cri.go:89] found id: ""
	I0626 20:56:37.751812   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:37.751877   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.757177   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:37.757241   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:37.790384   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:37.790413   47605 cri.go:89] found id: ""
	I0626 20:56:37.790420   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:37.790468   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.795294   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:37.795352   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:37.832125   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:37.832157   47605 cri.go:89] found id: ""
	I0626 20:56:37.832168   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:37.832239   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.836762   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:37.836816   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:37.877789   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:37.877817   47605 cri.go:89] found id: ""
	I0626 20:56:37.877827   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:37.877887   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.885276   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:37.885348   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:37.929701   47605 cri.go:89] found id: ""
	I0626 20:56:37.929731   47605 logs.go:284] 0 containers: []
	W0626 20:56:37.929745   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:37.929755   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:37.929815   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:37.970177   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:37.970201   47605 cri.go:89] found id: ""
	I0626 20:56:37.970211   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:37.970270   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.975002   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:37.975025   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:38.022831   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:38.022862   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:38.058414   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:38.058446   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:38.168689   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:38.168726   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:38.183930   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:38.183959   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:38.224623   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:38.224653   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:38.271164   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:38.271205   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:38.308365   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:38.308391   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:38.363321   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:38.363356   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:38.510275   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:38.510310   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:38.552512   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:38.552544   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:38.586122   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:38.586155   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:38.945144   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:38.962999   47309 api_server.go:72] duration metric: took 4m18.467522928s to wait for apiserver process to appear ...
	I0626 20:56:38.963026   47309 api_server.go:88] waiting for apiserver healthz status ...
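
After confirming the kube-apiserver process via pgrep, the run waits for a healthy apiserver. A sketch of an HTTP /healthz probe follows; the node IP/port and the skip-verify TLS config are assumptions for illustration (a real client would trust the cluster CA), and this is not minikube's actual api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Demo only: skip certificate verification because the test
			// cluster's CA is not in the host trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 60; i++ { // roughly two minutes of probing
		resp, err := client.Get("https://192.168.39.2:8443/healthz") // hypothetical node IP
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never reported healthy")
}
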
	I0626 20:56:38.963067   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:38.963129   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:39.002109   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.002133   47309 cri.go:89] found id: ""
	I0626 20:56:39.002141   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:39.002198   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.006799   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:39.006864   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:39.042531   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:39.042556   47309 cri.go:89] found id: ""
	I0626 20:56:39.042566   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:39.042621   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.047228   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:39.047301   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:39.080810   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.080842   47309 cri.go:89] found id: ""
	I0626 20:56:39.080850   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:39.080916   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.085173   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:39.085238   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:39.116857   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:39.116886   47309 cri.go:89] found id: ""
	I0626 20:56:39.116895   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:39.116946   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.121912   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:39.122007   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:39.166886   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.166912   47309 cri.go:89] found id: ""
	I0626 20:56:39.166920   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:39.166972   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.171344   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:39.171420   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:39.205333   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:39.205358   47309 cri.go:89] found id: ""
	I0626 20:56:39.205366   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:39.205445   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.211414   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:39.211491   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:39.249068   47309 cri.go:89] found id: ""
	I0626 20:56:39.249092   47309 logs.go:284] 0 containers: []
	W0626 20:56:39.249103   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:39.249110   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:39.249171   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:39.283295   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.283314   47309 cri.go:89] found id: ""
	I0626 20:56:39.283325   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:39.283372   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.287514   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:39.287537   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:39.420720   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:39.420752   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.479018   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:39.479052   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.512285   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:39.512313   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.549886   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:39.549922   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.590619   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:39.590647   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:40.076597   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:40.076642   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:40.092551   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:40.092581   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:40.135655   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:40.135699   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:40.184590   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:40.184628   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:40.238354   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:40.238393   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:40.283033   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:40.283075   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:41.567686   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:41.584431   47605 api_server.go:72] duration metric: took 4m9.528462616s to wait for apiserver process to appear ...
	I0626 20:56:41.584462   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:56:41.584492   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:41.584553   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:41.622027   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:41.622051   47605 cri.go:89] found id: ""
	I0626 20:56:41.622061   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:41.622119   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.626209   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:41.626271   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:41.658658   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:41.658680   47605 cri.go:89] found id: ""
	I0626 20:56:41.658689   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:41.658746   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.666357   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:41.666437   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:41.702344   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:41.702369   47605 cri.go:89] found id: ""
	I0626 20:56:41.702378   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:41.702443   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.706706   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:41.706775   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:41.743534   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:41.743554   47605 cri.go:89] found id: ""
	I0626 20:56:41.743561   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:41.743619   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.748338   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:41.748408   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:41.780299   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:41.780324   47605 cri.go:89] found id: ""
	I0626 20:56:41.780333   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:41.780392   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.785308   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:41.785395   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:41.819335   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:41.819361   47605 cri.go:89] found id: ""
	I0626 20:56:41.819370   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:41.819415   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.823767   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:41.823832   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:41.855049   47605 cri.go:89] found id: ""
	I0626 20:56:41.855079   47605 logs.go:284] 0 containers: []
	W0626 20:56:41.855088   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:41.855094   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:41.855147   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:41.886378   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:41.886400   47605 cri.go:89] found id: ""
	I0626 20:56:41.886408   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:41.886459   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.891748   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:41.891777   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:42.003933   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:42.003968   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:42.018182   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:42.018230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:42.145038   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:42.145074   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:42.181403   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:42.181438   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:42.224428   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:42.224467   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:42.260067   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:42.260097   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:42.312924   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:42.312972   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:42.347173   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:42.347203   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:42.920689   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:42.920725   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:42.970428   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:42.970456   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:43.021561   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.021587   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:42.886551   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:56:42.892462   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:56:42.894253   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:42.894277   47309 api_server.go:131] duration metric: took 3.931242905s to wait for apiserver health ...
	I0626 20:56:42.894286   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:42.894309   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:42.894364   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:42.931699   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:42.931728   47309 cri.go:89] found id: ""
	I0626 20:56:42.931736   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:42.931792   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.936873   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:42.936944   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:42.968701   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:42.968720   47309 cri.go:89] found id: ""
	I0626 20:56:42.968727   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:42.968778   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.974309   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:42.974381   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:43.010388   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:43.010416   47309 cri.go:89] found id: ""
	I0626 20:56:43.010425   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:43.010482   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.015524   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:43.015582   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:43.049074   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.049103   47309 cri.go:89] found id: ""
	I0626 20:56:43.049112   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:43.049173   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.053750   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:43.053814   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:43.096699   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:43.096727   47309 cri.go:89] found id: ""
	I0626 20:56:43.096734   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:43.096776   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.101210   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:43.101264   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:43.133316   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:43.133344   47309 cri.go:89] found id: ""
	I0626 20:56:43.133354   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:43.133420   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.138226   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:43.138289   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:43.169863   47309 cri.go:89] found id: ""
	I0626 20:56:43.169896   47309 logs.go:284] 0 containers: []
	W0626 20:56:43.169903   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:43.169908   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:43.169962   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:43.201859   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.201884   47309 cri.go:89] found id: ""
	I0626 20:56:43.201892   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:43.201942   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.207043   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:43.207072   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.264723   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:43.264755   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.301988   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.302016   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:43.344103   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:43.344132   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:43.357414   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:43.357445   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:43.486425   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:43.486453   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:43.529205   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:43.529239   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:43.575311   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:43.575344   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:44.074749   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:44.074790   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:44.184946   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:44.184987   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:44.221993   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:44.222028   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:44.263095   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:44.263127   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:46.817987   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:46.818014   47309 system_pods.go:61] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.818019   47309 system_pods.go:61] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.818023   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.818027   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.818031   47309 system_pods.go:61] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.818035   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.818041   47309 system_pods.go:61] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.818047   47309 system_pods.go:61] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.818052   47309 system_pods.go:74] duration metric: took 3.923762125s to wait for pod list to return data ...
	I0626 20:56:46.818061   47309 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:46.821789   47309 default_sa.go:45] found service account: "default"
	I0626 20:56:46.821811   47309 default_sa.go:55] duration metric: took 3.746079ms for default service account to be created ...
	I0626 20:56:46.821818   47309 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:46.830080   47309 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:46.830117   47309 system_pods.go:89] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.830127   47309 system_pods.go:89] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.830134   47309 system_pods.go:89] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.830141   47309 system_pods.go:89] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.830147   47309 system_pods.go:89] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.830153   47309 system_pods.go:89] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.830165   47309 system_pods.go:89] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.830178   47309 system_pods.go:89] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.830186   47309 system_pods.go:126] duration metric: took 8.363064ms to wait for k8s-apps to be running ...
	I0626 20:56:46.830198   47309 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:46.830250   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:46.851429   47309 system_svc.go:56] duration metric: took 21.223321ms WaitForService to wait for kubelet.
	I0626 20:56:46.851456   47309 kubeadm.go:581] duration metric: took 4m26.355992846s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:46.851482   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:46.856152   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:46.856177   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:46.856187   47309 node_conditions.go:105] duration metric: took 4.700595ms to run NodePressure ...
	I0626 20:56:46.856197   47309 start.go:228] waiting for startup goroutines ...
	I0626 20:56:46.856203   47309 start.go:233] waiting for cluster config update ...
	I0626 20:56:46.856212   47309 start.go:242] writing updated cluster config ...
	I0626 20:56:46.856472   47309 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:46.911414   47309 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:46.913280   47309 out.go:177] * Done! kubectl is now configured to use "no-preload-934450" cluster and "default" namespace by default
	I0626 20:56:45.561459   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:56:45.567555   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0626 20:56:45.568704   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:45.568720   47605 api_server.go:131] duration metric: took 3.984252941s to wait for apiserver health ...
	I0626 20:56:45.568728   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:45.568745   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:45.568789   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:45.608235   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:45.608261   47605 cri.go:89] found id: ""
	I0626 20:56:45.608270   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:45.608335   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.612705   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:45.612774   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:45.649330   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.649353   47605 cri.go:89] found id: ""
	I0626 20:56:45.649362   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:45.649440   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.655104   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:45.655178   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:45.699690   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.699711   47605 cri.go:89] found id: ""
	I0626 20:56:45.699722   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:45.699767   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.704455   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:45.704515   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:45.743181   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:45.743209   47605 cri.go:89] found id: ""
	I0626 20:56:45.743218   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:45.743283   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.748030   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:45.748098   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:45.787325   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:45.787352   47605 cri.go:89] found id: ""
	I0626 20:56:45.787360   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:45.787406   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.792119   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:45.792191   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:45.833192   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:45.833215   47605 cri.go:89] found id: ""
	I0626 20:56:45.833222   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:45.833279   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.838399   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:45.838464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:45.878372   47605 cri.go:89] found id: ""
	I0626 20:56:45.878403   47605 logs.go:284] 0 containers: []
	W0626 20:56:45.878410   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:45.878415   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:45.878464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:45.917051   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:45.917074   47605 cri.go:89] found id: ""
	I0626 20:56:45.917081   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:45.917125   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.921484   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:45.921508   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.962659   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:45.962699   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.993644   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:45.993674   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:46.055087   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:46.055130   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:46.574535   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:46.574581   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:46.617139   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:46.617174   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:46.729727   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:46.729768   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:46.860871   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:46.860908   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:46.922618   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:46.922657   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:46.975973   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:46.976000   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:47.017458   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:47.017488   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:47.058540   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:47.058567   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:49.582112   47605 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:49.582139   47605 system_pods.go:61] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.582145   47605 system_pods.go:61] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.582149   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.582153   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.582157   47605 system_pods.go:61] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.582163   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.582169   47605 system_pods.go:61] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.582175   47605 system_pods.go:61] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.582180   47605 system_pods.go:74] duration metric: took 4.013448182s to wait for pod list to return data ...
	I0626 20:56:49.582187   47605 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:49.588793   47605 default_sa.go:45] found service account: "default"
	I0626 20:56:49.588827   47605 default_sa.go:55] duration metric: took 6.634132ms for default service account to be created ...
	I0626 20:56:49.588836   47605 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:49.596519   47605 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:49.596549   47605 system_pods.go:89] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.596555   47605 system_pods.go:89] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.596562   47605 system_pods.go:89] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.596570   47605 system_pods.go:89] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.596577   47605 system_pods.go:89] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.596585   47605 system_pods.go:89] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.596600   47605 system_pods.go:89] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.596612   47605 system_pods.go:89] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.596622   47605 system_pods.go:126] duration metric: took 7.781697ms to wait for k8s-apps to be running ...
	I0626 20:56:49.596633   47605 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:49.596684   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:49.613188   47605 system_svc.go:56] duration metric: took 16.545322ms WaitForService to wait for kubelet.
	I0626 20:56:49.613212   47605 kubeadm.go:581] duration metric: took 4m17.557252465s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:49.613231   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:49.616820   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:49.616845   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:49.616854   47605 node_conditions.go:105] duration metric: took 3.619443ms to run NodePressure ...
	I0626 20:56:49.616864   47605 start.go:228] waiting for startup goroutines ...
	I0626 20:56:49.616870   47605 start.go:233] waiting for cluster config update ...
	I0626 20:56:49.616878   47605 start.go:242] writing updated cluster config ...
	I0626 20:56:49.617126   47605 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:49.665468   47605 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:49.667447   47605 out.go:177] * Done! kubectl is now configured to use "embed-certs-299839" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 20:46:44 UTC, ends at Mon 2023-06-26 21:05:51 UTC. --
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.550593682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=866eebe4-9f96-4f8f-a485-21612ca08123 name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.550964325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6,PodSandboxId:7474cf64113f113657e919862cde97615f8a0bbf69bd073d5dedf69613f5d1a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812756234774600,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51730db4-00b6-4240-917c-fed87615fd6e,},Annotations:map[string]string{io.kubernetes.container.hash: a5b8bc0a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848,PodSandboxId:e267a7bb0e9d69029e300c23e8303f15d10c4b89c1d72f6f8253cd565ecae91a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812755823573852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scfwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60aed765-875d-4023-9ce9-97b5a6a47995,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4a37f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222,PodSandboxId:357dd9f10db5654a0810550a5e45fe1f56ffe7d3dfd666a6e73c3d1ec46bd308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812755210396883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-tl42z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429d2f2e-a161-4353-8a29-1a4f8ddb4cc8,},Annotations:map[string]string{io.kubernetes.container.hash: b8fc0e35,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30,PodSandboxId:1547cb51040eb904d188464b633adbe2beaef07207eba8efa18c795a3aaedf1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812731778108648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-299839,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2d6de7e6c5751e431a9ee06dd0d7ceee,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8,PodSandboxId:8912d1e8039d298f7c5958a3ad4b43e5ad7a65dfab582f055b016601be5948fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812731565600996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-299839,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 916973a30c4bd49353b106072d59cc46,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58,PodSandboxId:45d98859a15fceb5152c2e51a077af78bd86ea2c947abedc63e22df78b22a2e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812731377697776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 03830abe69457302243911b537c06ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 19e1583a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84,PodSandboxId:cef0947b2f3743f87be4db35de5f80f3511f1cf59b96af4ce13359284ffd07c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812731405632634,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829ae1cb17e2bb94bba22c9e79b6c70
6,},Annotations:map[string]string{io.kubernetes.container.hash: d3109b24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=866eebe4-9f96-4f8f-a485-21612ca08123 name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.570554013Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=9a935598-f0bd-47e1-9aa2-d14d0d059f86 name=/runtime.v1.ImageService/ListImages
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.570855687Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.570987116Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.571058156Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.571137552Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.571205469Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.571276119Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.571349174Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.571421653Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.571617799Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.571708072Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"" file="storage/storage_transport.go:185"
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.571857258Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,RepoTags:[registry.k8s.io/kube-apiserver:v1.27.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0],Size_:122065872,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,RepoTags:[registry.k8s.io/kube-controller-manager:v1.27.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06],Size_:113919286,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:41697ceeb70b3f4
9e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,RepoTags:[registry.k8s.io/kube-scheduler:v1.27.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082 registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8],Size_:59811126,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,RepoTags:[registry.k8s.io/kube-proxy:v1.27.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699],Size_:72713623,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd280
01e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,RepoTags:[registry.k8s.io/etcd:3.5.7-0],RepoDigests:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9],Size_:297083935,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:
[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,RepoTags:[docker.io/kindest/kindnetd:v20230511-dc714da8],RepoDigests:[docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974 docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9],Size_:65249302,Uid:nil,Username:,Spec:nil,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Use
rname:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=9a935598-f0bd-47e1-9aa2-d14d0d059f86 name=/runtime.v1.ImageService/ListImages
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.580298285Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6dfe9a41-bf44-4fd2-9b1f-52174421acc6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.580645959Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e09aa5b8586bc32938ec1f1ce155641064677e97f1883ccda29245cfc57eefa1,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-vkggw,Uid:147679d1-7453-4e55-862c-fec18e08ba84,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812754855262572,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-vkggw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 147679d1-7453-4e55-862c-fec18e08ba84,k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:52:34.511961683Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7474cf64113f113657e919862cde97615f8a0bbf69bd073d5dedf69613f5d1a0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:51730db4-00b6-4240-917c-fed87615fd6e,Name
space:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812754501197082,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51730db4-00b6-4240-917c-fed87615fd6e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volume
s\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-06-26T20:52:34.160569126Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:357dd9f10db5654a0810550a5e45fe1f56ffe7d3dfd666a6e73c3d1ec46bd308,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-tl42z,Uid:429d2f2e-a161-4353-8a29-1a4f8ddb4cc8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812752523743725,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-tl42z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429d2f2e-a161-4353-8a29-1a4f8ddb4cc8,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:52:31.893730473Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e267a7bb0e9d69029e300c23e8303f15d10c4b89c1d72f6f8253cd565ecae91a,Metadata:&PodSandboxMetadata{Name:kube-proxy-scfwr,Uid:60aed765-875d-4023-9ce9-97b5a
6a47995,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812751978156593,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-scfwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60aed765-875d-4023-9ce9-97b5a6a47995,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:52:31.642280625Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8912d1e8039d298f7c5958a3ad4b43e5ad7a65dfab582f055b016601be5948fe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-299839,Uid:916973a30c4bd49353b106072d59cc46,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812730754199607,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916973a30c4bd49353b106072d59
cc46,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 916973a30c4bd49353b106072d59cc46,kubernetes.io/config.seen: 2023-06-26T20:52:10.199367344Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1547cb51040eb904d188464b633adbe2beaef07207eba8efa18c795a3aaedf1f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-299839,Uid:2d6de7e6c5751e431a9ee06dd0d7ceee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812730745855062,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d6de7e6c5751e431a9ee06dd0d7ceee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2d6de7e6c5751e431a9ee06dd0d7ceee,kubernetes.io/config.seen: 2023-06-26T20:52:10.199366357Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:45d98859a15fceb5152c
2e51a077af78bd86ea2c947abedc63e22df78b22a2e3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-299839,Uid:03830abe69457302243911b537c06ef5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812730733163971,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03830abe69457302243911b537c06ef5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.51:8443,kubernetes.io/config.hash: 03830abe69457302243911b537c06ef5,kubernetes.io/config.seen: 2023-06-26T20:52:10.199365165Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cef0947b2f3743f87be4db35de5f80f3511f1cf59b96af4ce13359284ffd07c7,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-299839,Uid:829ae1cb17e2bb94bba22c9e79b6c706,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:168
7812730690324297,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829ae1cb17e2bb94bba22c9e79b6c706,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.51:2379,kubernetes.io/config.hash: 829ae1cb17e2bb94bba22c9e79b6c706,kubernetes.io/config.seen: 2023-06-26T20:52:10.199360772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=6dfe9a41-bf44-4fd2-9b1f-52174421acc6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.587013367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dbfd42fb-dd86-4258-aec3-e0b5e4c86aa0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.587114077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dbfd42fb-dd86-4258-aec3-e0b5e4c86aa0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.587322670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6,PodSandboxId:7474cf64113f113657e919862cde97615f8a0bbf69bd073d5dedf69613f5d1a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812756234774600,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51730db4-00b6-4240-917c-fed87615fd6e,},Annotations:map[string]string{io.kubernetes.container.hash: a5b8bc0a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848,PodSandboxId:e267a7bb0e9d69029e300c23e8303f15d10c4b89c1d72f6f8253cd565ecae91a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812755823573852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scfwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60aed765-875d-4023-9ce9-97b5a6a47995,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4a37f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222,PodSandboxId:357dd9f10db5654a0810550a5e45fe1f56ffe7d3dfd666a6e73c3d1ec46bd308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812755210396883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-tl42z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429d2f2e-a161-4353-8a29-1a4f8ddb4cc8,},Annotations:map[string]string{io.kubernetes.container.hash: b8fc0e35,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30,PodSandboxId:1547cb51040eb904d188464b633adbe2beaef07207eba8efa18c795a3aaedf1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812731778108648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-299839,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2d6de7e6c5751e431a9ee06dd0d7ceee,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8,PodSandboxId:8912d1e8039d298f7c5958a3ad4b43e5ad7a65dfab582f055b016601be5948fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812731565600996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-299839,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 916973a30c4bd49353b106072d59cc46,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58,PodSandboxId:45d98859a15fceb5152c2e51a077af78bd86ea2c947abedc63e22df78b22a2e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812731377697776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 03830abe69457302243911b537c06ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 19e1583a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84,PodSandboxId:cef0947b2f3743f87be4db35de5f80f3511f1cf59b96af4ce13359284ffd07c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812731405632634,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829ae1cb17e2bb94bba22c9e79b6c70
6,},Annotations:map[string]string{io.kubernetes.container.hash: d3109b24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dbfd42fb-dd86-4258-aec3-e0b5e4c86aa0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.612176929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=385a2aa3-9ae1-4c84-a141-b103fd1c92f3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.612314117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=385a2aa3-9ae1-4c84-a141-b103fd1c92f3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.612584335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6,PodSandboxId:7474cf64113f113657e919862cde97615f8a0bbf69bd073d5dedf69613f5d1a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812756234774600,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51730db4-00b6-4240-917c-fed87615fd6e,},Annotations:map[string]string{io.kubernetes.container.hash: a5b8bc0a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848,PodSandboxId:e267a7bb0e9d69029e300c23e8303f15d10c4b89c1d72f6f8253cd565ecae91a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812755823573852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scfwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60aed765-875d-4023-9ce9-97b5a6a47995,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4a37f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222,PodSandboxId:357dd9f10db5654a0810550a5e45fe1f56ffe7d3dfd666a6e73c3d1ec46bd308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812755210396883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-tl42z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429d2f2e-a161-4353-8a29-1a4f8ddb4cc8,},Annotations:map[string]string{io.kubernetes.container.hash: b8fc0e35,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30,PodSandboxId:1547cb51040eb904d188464b633adbe2beaef07207eba8efa18c795a3aaedf1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812731778108648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-299839,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2d6de7e6c5751e431a9ee06dd0d7ceee,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8,PodSandboxId:8912d1e8039d298f7c5958a3ad4b43e5ad7a65dfab582f055b016601be5948fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812731565600996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-299839,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 916973a30c4bd49353b106072d59cc46,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58,PodSandboxId:45d98859a15fceb5152c2e51a077af78bd86ea2c947abedc63e22df78b22a2e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812731377697776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 03830abe69457302243911b537c06ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 19e1583a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84,PodSandboxId:cef0947b2f3743f87be4db35de5f80f3511f1cf59b96af4ce13359284ffd07c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812731405632634,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829ae1cb17e2bb94bba22c9e79b6c70
6,},Annotations:map[string]string{io.kubernetes.container.hash: d3109b24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=385a2aa3-9ae1-4c84-a141-b103fd1c92f3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.649633927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8e075ca0-d625-4806-9144-57b673c5c7a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.649751368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8e075ca0-d625-4806-9144-57b673c5c7a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:05:51 embed-certs-299839 crio[740]: time="2023-06-26 21:05:51.649961329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6,PodSandboxId:7474cf64113f113657e919862cde97615f8a0bbf69bd073d5dedf69613f5d1a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812756234774600,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51730db4-00b6-4240-917c-fed87615fd6e,},Annotations:map[string]string{io.kubernetes.container.hash: a5b8bc0a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848,PodSandboxId:e267a7bb0e9d69029e300c23e8303f15d10c4b89c1d72f6f8253cd565ecae91a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812755823573852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scfwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60aed765-875d-4023-9ce9-97b5a6a47995,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4a37f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222,PodSandboxId:357dd9f10db5654a0810550a5e45fe1f56ffe7d3dfd666a6e73c3d1ec46bd308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812755210396883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-tl42z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429d2f2e-a161-4353-8a29-1a4f8ddb4cc8,},Annotations:map[string]string{io.kubernetes.container.hash: b8fc0e35,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30,PodSandboxId:1547cb51040eb904d188464b633adbe2beaef07207eba8efa18c795a3aaedf1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812731778108648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-299839,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2d6de7e6c5751e431a9ee06dd0d7ceee,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8,PodSandboxId:8912d1e8039d298f7c5958a3ad4b43e5ad7a65dfab582f055b016601be5948fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812731565600996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-299839,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 916973a30c4bd49353b106072d59cc46,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58,PodSandboxId:45d98859a15fceb5152c2e51a077af78bd86ea2c947abedc63e22df78b22a2e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812731377697776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 03830abe69457302243911b537c06ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 19e1583a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84,PodSandboxId:cef0947b2f3743f87be4db35de5f80f3511f1cf59b96af4ce13359284ffd07c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812731405632634,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829ae1cb17e2bb94bba22c9e79b6c70
6,},Annotations:map[string]string{io.kubernetes.container.hash: d3109b24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8e075ca0-d625-4806-9144-57b673c5c7a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	f87813547f704       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   7474cf64113f1
	3aa7ee4c1eadc       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   13 minutes ago      Running             kube-proxy                0                   e267a7bb0e9d6
	f5850ea0b11e2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   357dd9f10db56
	e492b7211ab33       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   13 minutes ago      Running             kube-controller-manager   2                   1547cb51040eb
	c6b6f0adc88c6       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   13 minutes ago      Running             kube-scheduler            2                   8912d1e8039d2
	e57e4ae17d5c5       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   13 minutes ago      Running             etcd                      2                   cef0947b2f374
	8f534a31963ab       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   13 minutes ago      Running             kube-apiserver            2                   45d98859a15fc
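	
	The table above is the CRI client's view of the node; the triple-logged ListContainers responses earlier are the kubelet polling both the runtime.v1 and the deprecated runtime.v1alpha2 services. A minimal way to reproduce this listing on the node (profile name taken from these logs; crictl ships in the minikube VM) is:
	
	  # running containers only, like the State:CONTAINER_RUNNING filter above
	  minikube -p embed-certs-299839 ssh -- sudo crictl ps
	  # all containers, like the unfiltered (State:nil) requests
	  minikube -p embed-certs-299839 ssh -- sudo crictl ps -a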
	
	* 
	* ==> coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33072 - 16870 "HINFO IN 3099440260193012276.5770977196869146280. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015791912s
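	
	The two Corefile SHA512 lines above show a config reload shortly after startup, typically because minikube patches the CoreDNS ConfigMap during bring-up. A sketch for inspecting the active config, assuming the standard addon ConfigMap name (not taken from this log):
	
	  kubectl --context embed-certs-299839 -n kube-system get configmap coredns -o yaml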
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-299839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-299839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=embed-certs-299839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_52_19_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:52:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-299839
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 21:05:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 21:02:53 +0000   Mon, 26 Jun 2023 20:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 21:02:53 +0000   Mon, 26 Jun 2023 20:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 21:02:53 +0000   Mon, 26 Jun 2023 20:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 21:02:53 +0000   Mon, 26 Jun 2023 20:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    embed-certs-299839
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a134bb846064955a35a246d03c68303
	  System UUID:                0a134bb8-4606-4955-a35a-246d03c68303
	  Boot ID:                    f1a5622f-2af5-4c66-aabf-2d107fda507d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-tl42z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-299839                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-299839             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-299839    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-scfwr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-299839             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-74d5c6b9c-vkggw                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node embed-certs-299839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node embed-certs-299839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node embed-certs-299839 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node embed-certs-299839 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node embed-certs-299839 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node embed-certs-299839 event: Registered Node embed-certs-299839 in Controller
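	
	This node description can be regenerated at any time against the same profile:
	
	  kubectl --context embed-certs-299839 describe node embed-certs-299839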
	
	* 
	* ==> dmesg <==
	* [Jun26 20:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073051] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.217135] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.218815] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134412] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.551095] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.181934] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.119493] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.154677] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.135879] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +0.226171] systemd-fstab-generator[723]: Ignoring "noauto" for root device
	[Jun26 20:47] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[ +19.813990] kauditd_printk_skb: 34 callbacks suppressed
	[Jun26 20:52] systemd-fstab-generator[3710]: Ignoring "noauto" for root device
	[  +9.806377] systemd-fstab-generator[4038]: Ignoring "noauto" for root device
	[ +21.665236] kauditd_printk_skb: 9 callbacks suppressed
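	
	The kernel ring buffer above was captured from inside the VM; the equivalent manual capture (profile name from these logs) would be:
	
	  minikube -p embed-certs-299839 ssh -- dmesg | tail -n 50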
	
	* 
	* ==> etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] <==
	* {"level":"info","ts":"2023-06-26T20:52:13.086Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.51:2380"}
	{"level":"info","ts":"2023-06-26T20:52:13.081Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-26T20:52:13.087Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"9049a3446d48952a","initial-advertise-peer-urls":["https://192.168.39.51:2380"],"listen-peer-urls":["https://192.168.39.51:2380"],"advertise-client-urls":["https://192.168.39.51:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.51:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-26T20:52:13.088Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-26T20:52:13.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-26T20:52:13.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-26T20:52:13.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a received MsgPreVoteResp from 9049a3446d48952a at term 1"}
	{"level":"info","ts":"2023-06-26T20:52:13.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became candidate at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:13.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a received MsgVoteResp from 9049a3446d48952a at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:13.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9049a3446d48952a became leader at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:13.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9049a3446d48952a elected leader 9049a3446d48952a at term 2"}
	{"level":"info","ts":"2023-06-26T20:52:13.996Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9049a3446d48952a","local-member-attributes":"{Name:embed-certs-299839 ClientURLs:[https://192.168.39.51:2379]}","request-path":"/0/members/9049a3446d48952a/attributes","cluster-id":"ec92057c53901c6c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-26T20:52:13.996Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T20:52:13.997Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.51:2379"}
	{"level":"info","ts":"2023-06-26T20:52:13.998Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T20:52:14.002Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-26T20:52:14.009Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:14.021Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-26T20:52:14.021Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-26T20:52:14.021Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec92057c53901c6c","local-member-id":"9049a3446d48952a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:14.021Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:14.021Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T21:02:14.049Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":690}
	{"level":"info","ts":"2023-06-26T21:02:14.052Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":690,"took":"1.988775ms","hash":3570885676}
	{"level":"info","ts":"2023-06-26T21:02:14.052Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3570885676,"revision":690,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  21:05:52 up 19 min,  0 users,  load average: 0.18, 0.17, 0.15
	Linux embed-certs-299839 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] <==
	* I0626 21:02:16.834330       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:02:16.834588       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:02:16.834802       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:02:16.835984       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:03:15.719300       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.135.1:443: connect: connection refused
	I0626 21:03:15.719343       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:03:16.835539       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:03:16.836304       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:03:16.836394       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:03:16.836567       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:03:16.836607       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:03:16.838028       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:04:15.718781       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.135.1:443: connect: connection refused
	I0626 21:04:15.719041       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0626 21:05:15.719260       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.135.1:443: connect: connection refused
	I0626 21:05:15.719334       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:05:16.837368       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:05:16.837610       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:05:16.837657       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:05:16.838593       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:05:16.838680       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:05:16.838689       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
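	
	The repeating 503s come from the aggregated v1beta1.metrics.k8s.io APIService, whose backing service (kube-system/metrics-server:443) is refusing connections; this lines up with the failing metrics-server image pull in the kubelet log below. Client-side confirmation:
	
	  kubectl --context embed-certs-299839 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context embed-certs-299839 -n kube-system get endpoints metrics-server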
	
	* 
	* ==> kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] <==
	* W0626 20:59:31.233987       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:00:00.784380       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:00:01.242166       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:00:30.791028       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:00:31.252114       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:01:00.797000       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:01:01.261384       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:01:30.802876       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:01:31.274930       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:02:00.808427       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:02:01.284815       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:02:30.814387       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:02:31.294665       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:03:00.821310       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:03:01.302718       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:03:30.827333       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:03:31.310955       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:00.833347       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:01.320298       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:30.841640       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:31.331846       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:00.851785       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:05:01.341602       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:30.858608       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:05:31.351012       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
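	
	These controller-manager errors are downstream of the same unavailable metrics.k8s.io APIService: resource-quota and garbage-collector discovery both fail until that group either serves or is removed. A quick check of what the server actually advertises:
	
	  kubectl --context embed-certs-299839 api-versions | grep metrics.k8s.io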
	
	* 
	* ==> kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] <==
	* I0626 20:52:36.503863       1 node.go:141] Successfully retrieved node IP: 192.168.39.51
	I0626 20:52:36.504036       1 server_others.go:110] "Detected node IP" address="192.168.39.51"
	I0626 20:52:36.504108       1 server_others.go:554] "Using iptables proxy"
	I0626 20:52:36.565119       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0626 20:52:36.565216       1 server_others.go:192] "Using iptables Proxier"
	I0626 20:52:36.566137       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 20:52:36.567379       1 server.go:658] "Version info" version="v1.27.3"
	I0626 20:52:36.567430       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 20:52:36.569149       1 config.go:188] "Starting service config controller"
	I0626 20:52:36.569755       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 20:52:36.570082       1 config.go:97] "Starting endpoint slice config controller"
	I0626 20:52:36.570118       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 20:52:36.572296       1 config.go:315] "Starting node config controller"
	I0626 20:52:36.572337       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 20:52:36.670580       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 20:52:36.670596       1 shared_informer.go:318] Caches are synced for service config
	I0626 20:52:36.672538       1 shared_informer.go:318] Caches are synced for node config
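	
	kube-proxy is running the iptables proxier, so service routing lives in the nat table. A sanity check from inside the VM (KUBE-SERVICES is the proxier's standard top-level chain, not something taken from this log):
	
	  minikube -p embed-certs-299839 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head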
	
	* 
	* ==> kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] <==
	* W0626 20:52:16.665087       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 20:52:16.665195       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0626 20:52:16.780738       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:16.780823       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:16.780885       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 20:52:16.780922       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0626 20:52:16.812837       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 20:52:16.813377       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 20:52:16.868594       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:16.868660       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:16.974699       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0626 20:52:16.974751       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0626 20:52:17.026047       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:52:17.026108       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0626 20:52:17.033006       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:52:17.033118       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0626 20:52:17.079585       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:52:17.079639       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 20:52:17.084344       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:17.084422       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:17.181973       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:52:17.182080       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 20:52:17.218805       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:17.218976       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0626 20:52:20.023940       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
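	
	The Forbidden warnings above are a startup race: the scheduler began listing resources before its RBAC grants had propagated, and the final "Caches are synced" line shows it recovered on its own. The same permission can be probed directly (impersonation requires sufficient rights on the caller):
	
	  kubectl --context embed-certs-299839 auth can-i list persistentvolumes --as=system:kube-scheduler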
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 20:46:44 UTC, ends at Mon 2023-06-26 21:05:52 UTC. --
	Jun 26 21:03:19 embed-certs-299839 kubelet[4045]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:03:19 embed-certs-299839 kubelet[4045]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:03:26 embed-certs-299839 kubelet[4045]: E0626 21:03:26.617278    4045 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 26 21:03:26 embed-certs-299839 kubelet[4045]: E0626 21:03:26.617376    4045 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 26 21:03:26 embed-certs-299839 kubelet[4045]: E0626 21:03:26.617690    4045 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9nmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-vkggw_kube-system(147679d1-7453-4e55-862c-fec18e08ba84): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 21:03:26 embed-certs-299839 kubelet[4045]: E0626 21:03:26.617747    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:03:37 embed-certs-299839 kubelet[4045]: E0626 21:03:37.581629    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:03:49 embed-certs-299839 kubelet[4045]: E0626 21:03:49.580859    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:04:04 embed-certs-299839 kubelet[4045]: E0626 21:04:04.581225    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:04:16 embed-certs-299839 kubelet[4045]: E0626 21:04:16.579737    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:04:19 embed-certs-299839 kubelet[4045]: E0626 21:04:19.703525    4045 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:04:19 embed-certs-299839 kubelet[4045]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:04:19 embed-certs-299839 kubelet[4045]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:04:19 embed-certs-299839 kubelet[4045]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:04:28 embed-certs-299839 kubelet[4045]: E0626 21:04:28.580548    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:04:43 embed-certs-299839 kubelet[4045]: E0626 21:04:43.580104    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:04:56 embed-certs-299839 kubelet[4045]: E0626 21:04:56.580098    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:05:08 embed-certs-299839 kubelet[4045]: E0626 21:05:08.580877    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:05:19 embed-certs-299839 kubelet[4045]: E0626 21:05:19.703114    4045 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:05:19 embed-certs-299839 kubelet[4045]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:05:19 embed-certs-299839 kubelet[4045]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:05:19 embed-certs-299839 kubelet[4045]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:05:20 embed-certs-299839 kubelet[4045]: E0626 21:05:20.581013    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:05:35 embed-certs-299839 kubelet[4045]: E0626 21:05:35.580204    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:05:49 embed-certs-299839 kubelet[4045]: E0626 21:05:49.582165    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	
	* 
	* ==> storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] <==
	* I0626 20:52:36.414503       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 20:52:36.431317       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 20:52:36.432340       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 20:52:36.447880       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 20:52:36.448133       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-299839_f79cd480-b3e0-448b-a8c4-e03ac591d538!
	I0626 20:52:36.450187       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"57ff7a0a-6fb7-4c94-ada5-fb66605cf24f", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-299839_f79cd480-b3e0-448b-a8c4-e03ac591d538 became leader
	I0626 20:52:36.549614       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-299839_f79cd480-b3e0-448b-a8c4-e03ac591d538!
	

-- /stdout --
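The two errors that repeat through the kubelet journal above are test-induced rather than environmental: the metrics-server pulls fail because the addon image was prefixed with the deliberately unresolvable fake.domain registry (the kubelet lines show image="fake.domain/registry.k8s.io/echoserver:1.4", and the Audit table later in this report records the matching addons enable flags), while the ip6tables canary warning only indicates that the guest kernel has no ip6tables nat table loaded. A minimal sketch of the registry override that produces these pull failures, with the flags copied from the Audit table and the profile name swapped in for illustration:

	out/minikube-linux-amd64 -p embed-certs-299839 addons enable metrics-server \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain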
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299839 -n embed-certs-299839
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-299839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-vkggw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-299839 describe pod metrics-server-74d5c6b9c-vkggw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-299839 describe pod metrics-server-74d5c6b9c-vkggw: exit status 1 (73.134369ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-vkggw" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-299839 describe pod metrics-server-74d5c6b9c-vkggw: exit status 1
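The NotFound above is a teardown race: the pod named in the earlier listing was deleted before the post-mortem describe ran. A sketch that sidesteps the stale pod name by selecting on labels instead of the captured name; this assumes the metrics-server addon's usual k8s-app=metrics-server label, which the log itself does not show:

	kubectl --context embed-certs-299839 get pods -A --field-selector=status.phase!=Running
	kubectl --context embed-certs-299839 -n kube-system describe pod -l k8s-app=metrics-server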
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.50s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (463.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0626 21:03:11.372220   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 21:03:30.704983   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-06-26 21:09:38.354064109 +0000 UTC m=+5642.854091935
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-473235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-473235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.595µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-473235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
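The failed assertion above amounts to a nine-minute readiness wait on the dashboard pods. A roughly equivalent manual check, as a minimal sketch assuming the test run's kubeconfig context (the 540s timeout mirrors the test's 9m0s budget):

	kubectl --context default-k8s-diff-port-473235 -n kubernetes-dashboard wait \
	  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s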
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-473235 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-473235 logs -n 25: (1.262357351s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-473235  | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC | 26 Jun 23 20:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC |                     |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934450                  | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-299839                 | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-473235       | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:52 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 21:06 UTC | 26 Jun 23 21:06 UTC |
	| start   | -p newest-cni-421460 --memory=2200 --alsologtostderr   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:06 UTC | 26 Jun 23 21:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-421460             | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:07 UTC | 26 Jun 23 21:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:07 UTC | 26 Jun 23 21:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-421460                  | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-421460 --memory=2200 --alsologtostderr   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-421460 sudo                              | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	| delete  | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	| start   | -p auto-606105 --memory=3072                           | auto-606105                  | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	| start   | -p kindnet-606105                                      | kindnet-606105               | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	| start   | -p calico-606105 --memory=3072                         | calico-606105                | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 21:09:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 21:09:25.050406   54312 out.go:296] Setting OutFile to fd 1 ...
	I0626 21:09:25.050516   54312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 21:09:25.050525   54312 out.go:309] Setting ErrFile to fd 2...
	I0626 21:09:25.050530   54312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 21:09:25.050649   54312 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 21:09:25.051197   54312 out.go:303] Setting JSON to false
	I0626 21:09:25.052087   54312 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6712,"bootTime":1687807053,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 21:09:25.052142   54312 start.go:137] virtualization: kvm guest
	I0626 21:09:25.054949   54312 out.go:177] * [calico-606105] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 21:09:25.056581   54312 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 21:09:25.058010   54312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 21:09:25.056603   54312 notify.go:220] Checking for updates...
	I0626 21:09:25.059731   54312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 21:09:25.061563   54312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 21:09:25.063304   54312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 21:09:25.064795   54312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 21:09:25.066955   54312 config.go:182] Loaded profile config "auto-606105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:25.067060   54312 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:25.067133   54312 config.go:182] Loaded profile config "kindnet-606105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:25.067219   54312 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 21:09:25.103965   54312 out.go:177] * Using the kvm2 driver based on user configuration
	I0626 21:09:25.105312   54312 start.go:297] selected driver: kvm2
	I0626 21:09:25.105330   54312 start.go:954] validating driver "kvm2" against <nil>
	I0626 21:09:25.105343   54312 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 21:09:25.106018   54312 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 21:09:25.106104   54312 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 21:09:25.122258   54312 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 21:09:25.122304   54312 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 21:09:25.122495   54312 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 21:09:25.122520   54312 cni.go:84] Creating CNI manager for "calico"
	I0626 21:09:25.122525   54312 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0626 21:09:25.122532   54312 start_flags.go:319] config:
	{Name:calico-606105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-606105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 21:09:25.122653   54312 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 21:09:25.124482   54312 out.go:177] * Starting control plane node calico-606105 in cluster calico-606105
	I0626 21:09:22.160442   54061 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 21:09:22.160487   54061 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 21:09:22.160506   54061 cache.go:57] Caching tarball of preloaded images
	I0626 21:09:22.160591   54061 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 21:09:22.160612   54061 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 21:09:22.160745   54061 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/kindnet-606105/config.json ...
	I0626 21:09:22.160773   54061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/kindnet-606105/config.json: {Name:mk107c8169e960772166a521211ca7f8a6a352b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 21:09:22.160898   54061 start.go:365] acquiring machines lock for kindnet-606105: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 21:09:26.530595   54061 start.go:369] acquired machines lock for "kindnet-606105" in 4.369605638s
	I0626 21:09:26.530658   54061 start.go:93] Provisioning new machine with config: &{Name:kindnet-606105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-606105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 21:09:26.530770   54061 start.go:125] createHost starting for "" (driver="kvm2")
	I0626 21:09:26.532840   54061 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0626 21:09:26.533019   54061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 21:09:26.533068   54061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 21:09:26.551715   54061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0626 21:09:26.552182   54061 main.go:141] libmachine: () Calling .GetVersion
	I0626 21:09:26.552750   54061 main.go:141] libmachine: Using API Version  1
	I0626 21:09:26.552776   54061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 21:09:26.553156   54061 main.go:141] libmachine: () Calling .GetMachineName
	I0626 21:09:26.553348   54061 main.go:141] libmachine: (kindnet-606105) Calling .GetMachineName
	I0626 21:09:26.553536   54061 main.go:141] libmachine: (kindnet-606105) Calling .DriverName
	I0626 21:09:26.553702   54061 start.go:159] libmachine.API.Create for "kindnet-606105" (driver="kvm2")
	I0626 21:09:26.553736   54061 client.go:168] LocalClient.Create starting
	I0626 21:09:26.553769   54061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem
	I0626 21:09:26.553812   54061 main.go:141] libmachine: Decoding PEM data...
	I0626 21:09:26.553838   54061 main.go:141] libmachine: Parsing certificate...
	I0626 21:09:26.553920   54061 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem
	I0626 21:09:26.553948   54061 main.go:141] libmachine: Decoding PEM data...
	I0626 21:09:26.553968   54061 main.go:141] libmachine: Parsing certificate...
	I0626 21:09:26.553998   54061 main.go:141] libmachine: Running pre-create checks...
	I0626 21:09:26.554017   54061 main.go:141] libmachine: (kindnet-606105) Calling .PreCreateCheck
	I0626 21:09:26.554348   54061 main.go:141] libmachine: (kindnet-606105) Calling .GetConfigRaw
	I0626 21:09:26.554801   54061 main.go:141] libmachine: Creating machine...
	I0626 21:09:26.554823   54061 main.go:141] libmachine: (kindnet-606105) Calling .Create
	I0626 21:09:26.554951   54061 main.go:141] libmachine: (kindnet-606105) Creating KVM machine...
	I0626 21:09:26.555945   54061 main.go:141] libmachine: (kindnet-606105) DBG | found existing default KVM network
	I0626 21:09:26.557425   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:26.557255   54334 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f5a0}
	I0626 21:09:26.562633   54061 main.go:141] libmachine: (kindnet-606105) DBG | trying to create private KVM network mk-kindnet-606105 192.168.39.0/24...
	I0626 21:09:26.640768   54061 main.go:141] libmachine: (kindnet-606105) DBG | private KVM network mk-kindnet-606105 192.168.39.0/24 created
	I0626 21:09:26.640802   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:26.640747   54334 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 21:09:26.640822   54061 main.go:141] libmachine: (kindnet-606105) Setting up store path in /home/jenkins/minikube-integration/16761-7242/.minikube/machines/kindnet-606105 ...
	I0626 21:09:26.640839   54061 main.go:141] libmachine: (kindnet-606105) Building disk image from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso
	I0626 21:09:26.640949   54061 main.go:141] libmachine: (kindnet-606105) Downloading /home/jenkins/minikube-integration/16761-7242/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso...
	I0626 21:09:26.838576   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:26.838439   54334 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/kindnet-606105/id_rsa...
	I0626 21:09:26.920656   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:26.920550   54334 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/kindnet-606105/kindnet-606105.rawdisk...
	I0626 21:09:26.920694   54061 main.go:141] libmachine: (kindnet-606105) DBG | Writing magic tar header
	I0626 21:09:26.920716   54061 main.go:141] libmachine: (kindnet-606105) DBG | Writing SSH key tar header
	I0626 21:09:26.920738   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:26.920665   54334 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/kindnet-606105 ...
	I0626 21:09:26.920845   54061 main.go:141] libmachine: (kindnet-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/kindnet-606105
	I0626 21:09:26.920874   54061 main.go:141] libmachine: (kindnet-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines
	I0626 21:09:26.920889   54061 main.go:141] libmachine: (kindnet-606105) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/kindnet-606105 (perms=drwx------)
	I0626 21:09:26.920904   54061 main.go:141] libmachine: (kindnet-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 21:09:26.920923   54061 main.go:141] libmachine: (kindnet-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242
	I0626 21:09:26.920937   54061 main.go:141] libmachine: (kindnet-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0626 21:09:26.920952   54061 main.go:141] libmachine: (kindnet-606105) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines (perms=drwxr-xr-x)
	I0626 21:09:26.920965   54061 main.go:141] libmachine: (kindnet-606105) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube (perms=drwxr-xr-x)
	I0626 21:09:26.920975   54061 main.go:141] libmachine: (kindnet-606105) DBG | Checking permissions on dir: /home/jenkins
	I0626 21:09:26.920989   54061 main.go:141] libmachine: (kindnet-606105) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242 (perms=drwxrwxr-x)
	I0626 21:09:26.921003   54061 main.go:141] libmachine: (kindnet-606105) DBG | Checking permissions on dir: /home
	I0626 21:09:26.921013   54061 main.go:141] libmachine: (kindnet-606105) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0626 21:09:26.921029   54061 main.go:141] libmachine: (kindnet-606105) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0626 21:09:26.921038   54061 main.go:141] libmachine: (kindnet-606105) Creating domain...
	I0626 21:09:26.921054   54061 main.go:141] libmachine: (kindnet-606105) DBG | Skipping /home - not owner
	I0626 21:09:26.922122   54061 main.go:141] libmachine: (kindnet-606105) define libvirt domain using xml: 
	I0626 21:09:26.922148   54061 main.go:141] libmachine: (kindnet-606105) <domain type='kvm'>
	I0626 21:09:26.922161   54061 main.go:141] libmachine: (kindnet-606105)   <name>kindnet-606105</name>
	I0626 21:09:26.922173   54061 main.go:141] libmachine: (kindnet-606105)   <memory unit='MiB'>3072</memory>
	I0626 21:09:26.922183   54061 main.go:141] libmachine: (kindnet-606105)   <vcpu>2</vcpu>
	I0626 21:09:26.922192   54061 main.go:141] libmachine: (kindnet-606105)   <features>
	I0626 21:09:26.922204   54061 main.go:141] libmachine: (kindnet-606105)     <acpi/>
	I0626 21:09:26.922215   54061 main.go:141] libmachine: (kindnet-606105)     <apic/>
	I0626 21:09:26.922228   54061 main.go:141] libmachine: (kindnet-606105)     <pae/>
	I0626 21:09:26.922243   54061 main.go:141] libmachine: (kindnet-606105)     
	I0626 21:09:26.922256   54061 main.go:141] libmachine: (kindnet-606105)   </features>
	I0626 21:09:26.922274   54061 main.go:141] libmachine: (kindnet-606105)   <cpu mode='host-passthrough'>
	I0626 21:09:26.922309   54061 main.go:141] libmachine: (kindnet-606105)   
	I0626 21:09:26.922336   54061 main.go:141] libmachine: (kindnet-606105)   </cpu>
	I0626 21:09:26.922346   54061 main.go:141] libmachine: (kindnet-606105)   <os>
	I0626 21:09:26.922362   54061 main.go:141] libmachine: (kindnet-606105)     <type>hvm</type>
	I0626 21:09:26.922373   54061 main.go:141] libmachine: (kindnet-606105)     <boot dev='cdrom'/>
	I0626 21:09:26.922384   54061 main.go:141] libmachine: (kindnet-606105)     <boot dev='hd'/>
	I0626 21:09:26.922398   54061 main.go:141] libmachine: (kindnet-606105)     <bootmenu enable='no'/>
	I0626 21:09:26.922411   54061 main.go:141] libmachine: (kindnet-606105)   </os>
	I0626 21:09:26.922424   54061 main.go:141] libmachine: (kindnet-606105)   <devices>
	I0626 21:09:26.922437   54061 main.go:141] libmachine: (kindnet-606105)     <disk type='file' device='cdrom'>
	I0626 21:09:26.922451   54061 main.go:141] libmachine: (kindnet-606105)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/kindnet-606105/boot2docker.iso'/>
	I0626 21:09:26.922464   54061 main.go:141] libmachine: (kindnet-606105)       <target dev='hdc' bus='scsi'/>
	I0626 21:09:26.922477   54061 main.go:141] libmachine: (kindnet-606105)       <readonly/>
	I0626 21:09:26.922492   54061 main.go:141] libmachine: (kindnet-606105)     </disk>
	I0626 21:09:26.922506   54061 main.go:141] libmachine: (kindnet-606105)     <disk type='file' device='disk'>
	I0626 21:09:26.922519   54061 main.go:141] libmachine: (kindnet-606105)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0626 21:09:26.922536   54061 main.go:141] libmachine: (kindnet-606105)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/kindnet-606105/kindnet-606105.rawdisk'/>
	I0626 21:09:26.922548   54061 main.go:141] libmachine: (kindnet-606105)       <target dev='hda' bus='virtio'/>
	I0626 21:09:26.922576   54061 main.go:141] libmachine: (kindnet-606105)     </disk>
	I0626 21:09:26.922599   54061 main.go:141] libmachine: (kindnet-606105)     <interface type='network'>
	I0626 21:09:26.922615   54061 main.go:141] libmachine: (kindnet-606105)       <source network='mk-kindnet-606105'/>
	I0626 21:09:26.922627   54061 main.go:141] libmachine: (kindnet-606105)       <model type='virtio'/>
	I0626 21:09:26.922641   54061 main.go:141] libmachine: (kindnet-606105)     </interface>
	I0626 21:09:26.922653   54061 main.go:141] libmachine: (kindnet-606105)     <interface type='network'>
	I0626 21:09:26.922676   54061 main.go:141] libmachine: (kindnet-606105)       <source network='default'/>
	I0626 21:09:26.922697   54061 main.go:141] libmachine: (kindnet-606105)       <model type='virtio'/>
	I0626 21:09:26.922709   54061 main.go:141] libmachine: (kindnet-606105)     </interface>
	I0626 21:09:26.922719   54061 main.go:141] libmachine: (kindnet-606105)     <serial type='pty'>
	I0626 21:09:26.922733   54061 main.go:141] libmachine: (kindnet-606105)       <target port='0'/>
	I0626 21:09:26.922743   54061 main.go:141] libmachine: (kindnet-606105)     </serial>
	I0626 21:09:26.922770   54061 main.go:141] libmachine: (kindnet-606105)     <console type='pty'>
	I0626 21:09:26.922791   54061 main.go:141] libmachine: (kindnet-606105)       <target type='serial' port='0'/>
	I0626 21:09:26.922801   54061 main.go:141] libmachine: (kindnet-606105)     </console>
	I0626 21:09:26.922810   54061 main.go:141] libmachine: (kindnet-606105)     <rng model='virtio'>
	I0626 21:09:26.922821   54061 main.go:141] libmachine: (kindnet-606105)       <backend model='random'>/dev/random</backend>
	I0626 21:09:26.922829   54061 main.go:141] libmachine: (kindnet-606105)     </rng>
	I0626 21:09:26.922838   54061 main.go:141] libmachine: (kindnet-606105)     
	I0626 21:09:26.922845   54061 main.go:141] libmachine: (kindnet-606105)     
	I0626 21:09:26.922854   54061 main.go:141] libmachine: (kindnet-606105)   </devices>
	I0626 21:09:26.922866   54061 main.go:141] libmachine: (kindnet-606105) </domain>
	I0626 21:09:26.922883   54061 main.go:141] libmachine: (kindnet-606105) 
	I0626 21:09:26.926831   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:4b:bf:a9 in network default
	I0626 21:09:26.927348   54061 main.go:141] libmachine: (kindnet-606105) Ensuring networks are active...
	I0626 21:09:26.927385   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:26.928113   54061 main.go:141] libmachine: (kindnet-606105) Ensuring network default is active
	I0626 21:09:26.928438   54061 main.go:141] libmachine: (kindnet-606105) Ensuring network mk-kindnet-606105 is active
	I0626 21:09:26.928954   54061 main.go:141] libmachine: (kindnet-606105) Getting domain xml...
	I0626 21:09:26.929750   54061 main.go:141] libmachine: (kindnet-606105) Creating domain...
	I0626 21:09:24.797226   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:24.797716   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has current primary IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:24.797739   53649 main.go:141] libmachine: (auto-606105) Found IP for machine: 192.168.72.66
	I0626 21:09:24.797754   53649 main.go:141] libmachine: (auto-606105) Reserving static IP address...
	I0626 21:09:24.798154   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find host DHCP lease matching {name: "auto-606105", mac: "52:54:00:94:12:df", ip: "192.168.72.66"} in network mk-auto-606105
	I0626 21:09:24.870609   53649 main.go:141] libmachine: (auto-606105) DBG | Getting to WaitForSSH function...
	I0626 21:09:24.870636   53649 main.go:141] libmachine: (auto-606105) Reserved static IP address: 192.168.72.66
	I0626 21:09:24.870649   53649 main.go:141] libmachine: (auto-606105) Waiting for SSH to be available...
	I0626 21:09:24.873624   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:24.874155   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:minikube Clientid:01:52:54:00:94:12:df}
	I0626 21:09:24.874189   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:24.874360   53649 main.go:141] libmachine: (auto-606105) DBG | Using SSH client type: external
	I0626 21:09:24.874390   53649 main.go:141] libmachine: (auto-606105) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/id_rsa (-rw-------)
	I0626 21:09:24.874420   53649 main.go:141] libmachine: (auto-606105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 21:09:24.874435   53649 main.go:141] libmachine: (auto-606105) DBG | About to run SSH command:
	I0626 21:09:24.874450   53649 main.go:141] libmachine: (auto-606105) DBG | exit 0
	I0626 21:09:24.977280   53649 main.go:141] libmachine: (auto-606105) DBG | SSH cmd err, output: <nil>: 
	I0626 21:09:24.977629   53649 main.go:141] libmachine: (auto-606105) KVM machine creation complete!
	I0626 21:09:24.977923   53649 main.go:141] libmachine: (auto-606105) Calling .GetConfigRaw
	I0626 21:09:24.978527   53649 main.go:141] libmachine: (auto-606105) Calling .DriverName
	I0626 21:09:24.978737   53649 main.go:141] libmachine: (auto-606105) Calling .DriverName
	I0626 21:09:24.978938   53649 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0626 21:09:24.978954   53649 main.go:141] libmachine: (auto-606105) Calling .GetState
	I0626 21:09:24.980256   53649 main.go:141] libmachine: Detecting operating system of created instance...
	I0626 21:09:24.980269   53649 main.go:141] libmachine: Waiting for SSH to be available...
	I0626 21:09:24.980276   53649 main.go:141] libmachine: Getting to WaitForSSH function...
	I0626 21:09:24.980283   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:24.982550   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:24.982904   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:24.982934   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:24.983103   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:24.983272   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:24.983386   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:24.983489   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:24.983656   53649 main.go:141] libmachine: Using SSH client type: native
	I0626 21:09:24.984059   53649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0626 21:09:24.984071   53649 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0626 21:09:25.112979   53649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 21:09:25.113011   53649 main.go:141] libmachine: Detecting the provisioner...
	I0626 21:09:25.113023   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:25.115825   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.116164   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:25.116224   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.116315   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:25.116586   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:25.116779   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:25.116973   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:25.117128   53649 main.go:141] libmachine: Using SSH client type: native
	I0626 21:09:25.117560   53649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0626 21:09:25.117577   53649 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0626 21:09:25.242133   53649 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-ge2e95ab-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0626 21:09:25.242199   53649 main.go:141] libmachine: found compatible host: buildroot
	I0626 21:09:25.242214   53649 main.go:141] libmachine: Provisioning with buildroot...
	I0626 21:09:25.242225   53649 main.go:141] libmachine: (auto-606105) Calling .GetMachineName
	I0626 21:09:25.242459   53649 buildroot.go:166] provisioning hostname "auto-606105"
	I0626 21:09:25.242494   53649 main.go:141] libmachine: (auto-606105) Calling .GetMachineName
	I0626 21:09:25.242685   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:25.245465   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.245901   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:25.245927   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.246112   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:25.246292   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:25.246441   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:25.246562   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:25.246713   53649 main.go:141] libmachine: Using SSH client type: native
	I0626 21:09:25.247103   53649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0626 21:09:25.247122   53649 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-606105 && echo "auto-606105" | sudo tee /etc/hostname
	I0626 21:09:25.386604   53649 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-606105
	
	I0626 21:09:25.386653   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:25.389279   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.389641   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:25.389665   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.389890   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:25.390069   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:25.390236   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:25.390374   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:25.390509   53649 main.go:141] libmachine: Using SSH client type: native
	I0626 21:09:25.391069   53649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0626 21:09:25.391094   53649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-606105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-606105/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-606105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 21:09:25.523059   53649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
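(The two SSH commands above set the transient and persistent hostname and keep the 127.0.1.1 entry in /etc/hosts in sync. A quick way to verify the result on the guest, as a sketch rather than part of the test run:)

	hostname                        # should print auto-606105
	grep 'auto-606105' /etc/hosts   # should show the 127.0.1.1 mapping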
	I0626 21:09:25.523093   53649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 21:09:25.523118   53649 buildroot.go:174] setting up certificates
	I0626 21:09:25.523128   53649 provision.go:83] configureAuth start
	I0626 21:09:25.523137   53649 main.go:141] libmachine: (auto-606105) Calling .GetMachineName
	I0626 21:09:25.523435   53649 main.go:141] libmachine: (auto-606105) Calling .GetIP
	I0626 21:09:25.525939   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.526264   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:25.526288   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.526477   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:25.528664   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.528971   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:25.529006   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.529084   53649 provision.go:138] copyHostCerts
	I0626 21:09:25.529152   53649 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 21:09:25.529164   53649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 21:09:25.529237   53649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 21:09:25.529369   53649 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 21:09:25.529398   53649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 21:09:25.529438   53649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 21:09:25.529543   53649 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 21:09:25.529553   53649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 21:09:25.529581   53649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 21:09:25.529639   53649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.auto-606105 san=[192.168.72.66 192.168.72.66 localhost 127.0.0.1 minikube auto-606105]
	I0626 21:09:25.769839   53649 provision.go:172] copyRemoteCerts
	I0626 21:09:25.769892   53649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 21:09:25.769919   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:25.772615   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.772937   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:25.772979   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.773165   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:25.773364   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:25.773553   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:25.773708   53649 sshutil.go:53] new ssh client: &{IP:192.168.72.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/id_rsa Username:docker}
	I0626 21:09:25.866402   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 21:09:25.891356   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0626 21:09:25.914891   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 21:09:25.938061   53649 provision.go:86] duration metric: configureAuth took 414.905222ms
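(configureAuth generated a server certificate with the SANs listed above and copied it to /etc/docker on the guest. One way to confirm the SANs landed in the cert, using standard openssl flags; not part of the test run:)

	sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	    | grep -A1 'Subject Alternative Name'
	# expect 192.168.72.66, localhost, 127.0.0.1, minikube, auto-606105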
	I0626 21:09:25.938086   53649 buildroot.go:189] setting minikube options for container-runtime
	I0626 21:09:25.938254   53649 config.go:182] Loaded profile config "auto-606105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:25.938387   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:25.940859   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.941187   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:25.941217   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:25.941360   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:25.941564   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:25.941738   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:25.941882   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:25.942040   53649 main.go:141] libmachine: Using SSH client type: native
	I0626 21:09:25.942483   53649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0626 21:09:25.942507   53649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 21:09:26.264858   53649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
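(The drop-in just written injects the insecure-registry flag for the in-cluster service CIDR. A sketch of how to confirm it took effect on the guest, with the file path as shown in the log:)

	cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio           # should print: active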
	I0626 21:09:26.264888   53649 main.go:141] libmachine: Checking connection to Docker...
	I0626 21:09:26.264900   53649 main.go:141] libmachine: (auto-606105) Calling .GetURL
	I0626 21:09:26.266006   53649 main.go:141] libmachine: (auto-606105) DBG | Using libvirt version 6000000
	I0626 21:09:26.267953   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.268203   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:26.268223   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.268401   53649 main.go:141] libmachine: Docker is up and running!
	I0626 21:09:26.268420   53649 main.go:141] libmachine: Reticulating splines...
	I0626 21:09:26.268428   53649 client.go:171] LocalClient.Create took 23.188354478s
	I0626 21:09:26.268452   53649 start.go:167] duration metric: libmachine.API.Create for "auto-606105" took 23.188423359s
	I0626 21:09:26.268465   53649 start.go:300] post-start starting for "auto-606105" (driver="kvm2")
	I0626 21:09:26.268476   53649 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 21:09:26.268507   53649 main.go:141] libmachine: (auto-606105) Calling .DriverName
	I0626 21:09:26.268834   53649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 21:09:26.268873   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:26.270831   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.271080   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:26.271112   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.271234   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:26.271430   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:26.271587   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:26.271745   53649 sshutil.go:53] new ssh client: &{IP:192.168.72.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/id_rsa Username:docker}
	I0626 21:09:26.362699   53649 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 21:09:26.367167   53649 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 21:09:26.367194   53649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 21:09:26.367264   53649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 21:09:26.367355   53649 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 21:09:26.367462   53649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 21:09:26.375564   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 21:09:26.399901   53649 start.go:303] post-start completed in 131.423437ms
	I0626 21:09:26.399947   53649 main.go:141] libmachine: (auto-606105) Calling .GetConfigRaw
	I0626 21:09:26.400553   53649 main.go:141] libmachine: (auto-606105) Calling .GetIP
	I0626 21:09:26.402677   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.403070   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:26.403108   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.403299   53649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/config.json ...
	I0626 21:09:26.403484   53649 start.go:128] duration metric: createHost completed in 23.342368515s
	I0626 21:09:26.403510   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:26.405539   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.405871   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:26.405901   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.405976   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:26.406160   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:26.406343   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:26.406494   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:26.406635   53649 main.go:141] libmachine: Using SSH client type: native
	I0626 21:09:26.407010   53649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.66 22 <nil> <nil>}
	I0626 21:09:26.407022   53649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 21:09:26.530383   53649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687813766.510747278
	
	I0626 21:09:26.530402   53649 fix.go:206] guest clock: 1687813766.510747278
	I0626 21:09:26.530414   53649 fix.go:219] Guest: 2023-06-26 21:09:26.510747278 +0000 UTC Remote: 2023-06-26 21:09:26.403496267 +0000 UTC m=+23.448449716 (delta=107.251011ms)
	I0626 21:09:26.530436   53649 fix.go:190] guest clock delta is within tolerance: 107.251011ms
	I0626 21:09:26.530443   53649 start.go:83] releasing machines lock for "auto-606105", held for 23.469415724s
	I0626 21:09:26.530474   53649 main.go:141] libmachine: (auto-606105) Calling .DriverName
	I0626 21:09:26.530768   53649 main.go:141] libmachine: (auto-606105) Calling .GetIP
	I0626 21:09:26.533460   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.533846   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:26.533881   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.534120   53649 main.go:141] libmachine: (auto-606105) Calling .DriverName
	I0626 21:09:26.534607   53649 main.go:141] libmachine: (auto-606105) Calling .DriverName
	I0626 21:09:26.534805   53649 main.go:141] libmachine: (auto-606105) Calling .DriverName
	I0626 21:09:26.534893   53649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 21:09:26.534940   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:26.535016   53649 ssh_runner.go:195] Run: cat /version.json
	I0626 21:09:26.535042   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHHostname
	I0626 21:09:26.537490   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.537644   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.537873   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:26.537903   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.538028   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:26.538046   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:26.538064   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:26.538213   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHPort
	I0626 21:09:26.538385   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:26.538404   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHKeyPath
	I0626 21:09:26.538510   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:26.538560   53649 main.go:141] libmachine: (auto-606105) Calling .GetSSHUsername
	I0626 21:09:26.538633   53649 sshutil.go:53] new ssh client: &{IP:192.168.72.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/id_rsa Username:docker}
	I0626 21:09:26.538684   53649 sshutil.go:53] new ssh client: &{IP:192.168.72.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/id_rsa Username:docker}
	I0626 21:09:26.662593   53649 ssh_runner.go:195] Run: systemctl --version
	I0626 21:09:26.668551   53649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 21:09:26.828663   53649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 21:09:26.836126   53649 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 21:09:26.836197   53649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 21:09:26.851529   53649 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 21:09:26.851549   53649 start.go:466] detecting cgroup driver to use...
	I0626 21:09:26.851614   53649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 21:09:26.870343   53649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 21:09:26.882068   53649 docker.go:196] disabling cri-docker service (if available) ...
	I0626 21:09:26.882132   53649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 21:09:26.893917   53649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 21:09:26.907770   53649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 21:09:27.007553   53649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 21:09:27.121248   53649 docker.go:212] disabling docker service ...
	I0626 21:09:27.121319   53649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 21:09:27.135292   53649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 21:09:27.146868   53649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 21:09:27.260846   53649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 21:09:27.391112   53649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 21:09:27.404481   53649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 21:09:27.422521   53649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 21:09:27.422579   53649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 21:09:27.432548   53649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 21:09:27.432614   53649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 21:09:27.442462   53649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 21:09:27.451624   53649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 21:09:27.460753   53649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
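(The sed edits above pin the pause image and switch CRI-O to the cgroupfs manager with conmon in the pod cgroup. A quick check of the effective drop-in, path as in the log:)

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"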
	I0626 21:09:27.469697   53649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 21:09:27.477251   53649 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 21:09:27.477309   53649 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 21:09:27.489542   53649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 21:09:27.498472   53649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 21:09:27.612687   53649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 21:09:27.796927   53649 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 21:09:27.797008   53649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 21:09:27.802583   53649 start.go:534] Will wait 60s for crictl version
	I0626 21:09:27.802638   53649 ssh_runner.go:195] Run: which crictl
	I0626 21:09:27.806530   53649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 21:09:27.840153   53649 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 21:09:27.840228   53649 ssh_runner.go:195] Run: crio --version
	I0626 21:09:27.897245   53649 ssh_runner.go:195] Run: crio --version
	I0626 21:09:27.952219   53649 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 21:09:27.953494   53649 main.go:141] libmachine: (auto-606105) Calling .GetIP
	I0626 21:09:27.956813   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:27.957200   53649 main.go:141] libmachine: (auto-606105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:12:df", ip: ""} in network mk-auto-606105: {Iface:virbr3 ExpiryTime:2023-06-26 22:09:18 +0000 UTC Type:0 Mac:52:54:00:94:12:df Iaid: IPaddr:192.168.72.66 Prefix:24 Hostname:auto-606105 Clientid:01:52:54:00:94:12:df}
	I0626 21:09:27.957234   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined IP address 192.168.72.66 and MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:27.957467   53649 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0626 21:09:27.962041   53649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
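(The one-liner above stages the rewritten hosts file in /tmp and installs it with "sudo cp" because the shell redirection runs as the unprivileged SSH user, which cannot write /etc/hosts directly. The same pattern, written out:)

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.72.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # only the cp needs root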
	I0626 21:09:27.976518   53649 localpath.go:92] copying /home/jenkins/minikube-integration/16761-7242/.minikube/client.crt -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/client.crt
	I0626 21:09:27.976674   53649 localpath.go:117] copying /home/jenkins/minikube-integration/16761-7242/.minikube/client.key -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/client.key
	I0626 21:09:27.976795   53649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 21:09:27.976862   53649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 21:09:25.126027   54312 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 21:09:25.126063   54312 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 21:09:25.126087   54312 cache.go:57] Caching tarball of preloaded images
	I0626 21:09:25.126163   54312 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 21:09:25.126180   54312 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 21:09:25.126279   54312 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/calico-606105/config.json ...
	I0626 21:09:25.126295   54312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/calico-606105/config.json: {Name:mkd6e88df4660d8c593661a80cedbafbf2c04ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 21:09:25.126436   54312 start.go:365] acquiring machines lock for calico-606105: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 21:09:28.208300   54061 main.go:141] libmachine: (kindnet-606105) Waiting to get IP...
	I0626 21:09:28.209267   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:28.209709   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:28.209746   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:28.209713   54334 retry.go:31] will retry after 278.5494ms: waiting for machine to come up
	I0626 21:09:28.490384   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:28.490946   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:28.490978   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:28.490911   54334 retry.go:31] will retry after 274.355367ms: waiting for machine to come up
	I0626 21:09:28.767689   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:28.768487   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:28.768515   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:28.768442   54334 retry.go:31] will retry after 446.362903ms: waiting for machine to come up
	I0626 21:09:29.216056   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:29.216533   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:29.216563   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:29.216498   54334 retry.go:31] will retry after 583.920736ms: waiting for machine to come up
	I0626 21:09:29.802337   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:29.802788   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:29.802816   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:29.802747   54334 retry.go:31] will retry after 624.573513ms: waiting for machine to come up
	I0626 21:09:30.428485   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:30.428896   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:30.428935   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:30.428833   54334 retry.go:31] will retry after 873.481267ms: waiting for machine to come up
	I0626 21:09:31.303515   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:31.304045   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:31.304077   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:31.303987   54334 retry.go:31] will retry after 808.991629ms: waiting for machine to come up
	I0626 21:09:28.013439   53649 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 21:09:28.013515   53649 ssh_runner.go:195] Run: which lz4
	I0626 21:09:28.018018   53649 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 21:09:28.022912   53649 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 21:09:28.022944   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 21:09:29.919094   53649 crio.go:444] Took 1.901121 seconds to copy over tarball
	I0626 21:09:29.919159   53649 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 21:09:33.065433   53649 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.146242096s)
	I0626 21:09:33.065461   53649 crio.go:451] Took 3.146341 seconds to extract the tarball
	I0626 21:09:33.065472   53649 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 21:09:33.112230   53649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 21:09:33.164639   53649 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 21:09:33.164663   53649 cache_images.go:84] Images are preloaded, skipping loading
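(With the preload extracted, "sudo crictl images" now reports every image kubeadm will need, so cache_images skips loading. Spot-checking by hand, assuming the v1.27.3 control-plane images:)

	sudo crictl images | grep registry.k8s.io/kube-apiserver   # should list v1.27.3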
	I0626 21:09:33.164734   53649 ssh_runner.go:195] Run: crio config
	I0626 21:09:33.236182   53649 cni.go:84] Creating CNI manager for ""
	I0626 21:09:33.236210   53649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 21:09:33.236223   53649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 21:09:33.236246   53649 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.66 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-606105 NodeName:auto-606105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 21:09:33.236423   53649 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-606105"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
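(The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. If this kubeadm build ships the validate subcommand, available upstream since v1.26, it could be sanity-checked before init; a sketch using the binary path from this log:)

	sudo /var/lib/minikube/binaries/v1.27.3/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new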
	I0626 21:09:33.236500   53649 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=auto-606105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:auto-606105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 21:09:33.236563   53649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 21:09:33.246039   53649 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 21:09:33.246104   53649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 21:09:33.254295   53649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I0626 21:09:33.271284   53649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 21:09:33.288163   53649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I0626 21:09:33.304092   53649 ssh_runner.go:195] Run: grep 192.168.72.66	control-plane.minikube.internal$ /etc/hosts
	I0626 21:09:33.307954   53649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 21:09:33.319723   53649 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105 for IP: 192.168.72.66
	I0626 21:09:33.319749   53649 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 21:09:33.319900   53649 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 21:09:33.319957   53649 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 21:09:33.320053   53649 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/client.key
	I0626 21:09:33.320081   53649 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.key.566ab58b
	I0626 21:09:33.320096   53649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.crt.566ab58b with IP's: [192.168.72.66 10.96.0.1 127.0.0.1 10.0.0.1]
	I0626 21:09:33.538574   53649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.crt.566ab58b ...
	I0626 21:09:33.538599   53649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.crt.566ab58b: {Name:mkae8352f8dfd193b091587c7b2e189a9602ea3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 21:09:33.538771   53649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.key.566ab58b ...
	I0626 21:09:33.538787   53649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.key.566ab58b: {Name:mk1e94667d33a3e1d47e5556d3b50b548e2d44ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 21:09:33.538883   53649 certs.go:337] copying /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.crt.566ab58b -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.crt
	I0626 21:09:33.538950   53649 certs.go:341] copying /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.key.566ab58b -> /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.key
	I0626 21:09:33.539000   53649 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/proxy-client.key
	I0626 21:09:33.539013   53649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/proxy-client.crt with IP's: []
	I0626 21:09:33.681295   53649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/proxy-client.crt ...
	I0626 21:09:33.681321   53649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/proxy-client.crt: {Name:mk9e2890302b7cfb27ebfcf12faa5558b3479f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 21:09:33.681503   53649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/proxy-client.key ...
	I0626 21:09:33.681521   53649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/proxy-client.key: {Name:mkd822165a9a784d336227ab696e791d95d25b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 21:09:33.681719   53649 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 21:09:33.681763   53649 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 21:09:33.681778   53649 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 21:09:33.681812   53649 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 21:09:33.681843   53649 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 21:09:33.681877   53649 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 21:09:33.681931   53649 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 21:09:33.682451   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 21:09:33.706692   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0626 21:09:33.729957   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 21:09:33.755957   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 21:09:33.779837   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 21:09:33.802566   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 21:09:33.826122   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 21:09:33.848826   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 21:09:33.871757   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 21:09:33.897420   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 21:09:33.926210   53649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 21:09:33.955109   53649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 21:09:33.972118   53649 ssh_runner.go:195] Run: openssl version
	I0626 21:09:33.977343   53649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 21:09:33.989985   53649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 21:09:33.995660   53649 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 21:09:33.995713   53649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 21:09:34.002713   53649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 21:09:34.012451   53649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 21:09:34.024926   53649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 21:09:34.030963   53649 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 21:09:34.031017   53649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 21:09:34.038027   53649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 21:09:34.047667   53649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 21:09:34.060126   53649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 21:09:34.064740   53649 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 21:09:34.064796   53649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 21:09:34.072030   53649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
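(Each "ln -fs .../<hash>.0" above follows OpenSSL's subject-hash convention: the link name is the hash printed by the "openssl x509 -hash" commands minikube just ran. For example, for the minikube CA:)

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the link /etc/ssl/certs/b5213941.0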
	I0626 21:09:34.081905   53649 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 21:09:34.086075   53649 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 21:09:34.086138   53649 kubeadm.go:404] StartCluster: {Name:auto-606105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-606105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.66 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 21:09:34.086232   53649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 21:09:34.086279   53649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 21:09:34.117011   53649 cri.go:89] found id: ""
	I0626 21:09:34.117076   53649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 21:09:34.126018   53649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 21:09:34.134459   53649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 21:09:34.143564   53649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 21:09:34.143614   53649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 21:09:34.199698   53649 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 21:09:34.199818   53649 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 21:09:34.344682   53649 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 21:09:34.344829   53649 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 21:09:34.344959   53649 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 21:09:34.529153   53649 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 21:09:32.114390   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:32.114879   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:32.114912   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:32.114835   54334 retry.go:31] will retry after 1.09777186s: waiting for machine to come up
	I0626 21:09:33.214130   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:33.214547   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:33.214582   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:33.214505   54334 retry.go:31] will retry after 1.279820556s: waiting for machine to come up
	I0626 21:09:34.495710   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:34.496250   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:34.496280   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:34.496196   54334 retry.go:31] will retry after 2.025917643s: waiting for machine to come up
	I0626 21:09:36.523999   54061 main.go:141] libmachine: (kindnet-606105) DBG | domain kindnet-606105 has defined MAC address 52:54:00:b3:24:85 in network mk-kindnet-606105
	I0626 21:09:36.524529   54061 main.go:141] libmachine: (kindnet-606105) DBG | unable to find current IP address of domain kindnet-606105 in network mk-kindnet-606105
	I0626 21:09:36.524561   54061 main.go:141] libmachine: (kindnet-606105) DBG | I0626 21:09:36.524471   54334 retry.go:31] will retry after 2.1699572s: waiting for machine to come up
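
The kindnet-606105 lines interleaved here show libmachine polling for the VM's DHCP lease with a growing, jittered delay. A sketch of that retry shape; `lookupIP` is a hypothetical stand-in for the driver's address query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP with a growing, jittered delay, the pattern
// behind the "will retry after 1.09777186s" lines above. Sketch only;
// the real loop lives in minikube's retry.go and the KVM driver.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/4)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	n := 0
	ip, err := waitForIP(func() (string, error) {
		n++
		if n < 3 {
			return "", errors.New("no lease yet") // simulate a slow boot
		}
		return "192.168.50.10", nil
	}, 10)
	fmt.Println(ip, err)
}
```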
	I0626 21:09:34.716267   53649 out.go:204]   - Generating certificates and keys ...
	I0626 21:09:34.716471   53649 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 21:09:34.716570   53649 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 21:09:34.716667   53649 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0626 21:09:34.808685   53649 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0626 21:09:35.152797   53649 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0626 21:09:35.201297   53649 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0626 21:09:35.286835   53649 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0626 21:09:35.287205   53649 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-606105 localhost] and IPs [192.168.72.66 127.0.0.1 ::1]
	I0626 21:09:35.530580   53649 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0626 21:09:35.531129   53649 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-606105 localhost] and IPs [192.168.72.66 127.0.0.1 ::1]
	I0626 21:09:35.728590   53649 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0626 21:09:35.813261   53649 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0626 21:09:36.019301   53649 kubeadm.go:322] [certs] Generating "sa" key and public key
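
The [certs] phase signs each serving certificate against the CA generated just above it; the etcd/server line spells out the SAN set. A minimal crypto/x509 sketch producing a CA-signed certificate with exactly those SANs (same shape as kubeadm's output, not kubeadm's code; error handling elided for brevity):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for kubeadm's etcd/ca.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "etcd-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the SANs kubeadm logged for etcd/server.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "etcd-server"},
		DNSNames:     []string{"auto-606105", "localhost"},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.72.66"), net.ParseIP("127.0.0.1"), net.ParseIP("::1"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(1, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```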
	I0626 21:09:36.019796   53649 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 21:09:36.602404   53649 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 21:09:37.039740   53649 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 21:09:37.228406   53649 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 21:09:37.292187   53649 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 21:09:37.310758   53649 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 21:09:37.311953   53649 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 21:09:37.312010   53649 kubeadm.go:322] [kubelet-start] Starting the kubelet
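
The [kubeconfig] steps serialize admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, all in the same file format. A sketch with client-go's clientcmd package; the endpoint matches the node above, while the certificate bytes are placeholders:

```go
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// Writes a kubeconfig in the same format as kubeadm's admin.conf.
// Certificate and key bytes are placeholders; real files embed PEM data.
func main() {
	cfg := api.NewConfig()
	cfg.Clusters["auto-606105"] = &api.Cluster{
		Server:                   "https://192.168.72.66:8443",
		CertificateAuthorityData: []byte("<ca.crt PEM>"),
	}
	cfg.AuthInfos["kubernetes-admin"] = &api.AuthInfo{
		ClientCertificateData: []byte("<client.crt PEM>"),
		ClientKeyData:         []byte("<client.key PEM>"),
	}
	cfg.Contexts["kubernetes-admin@auto-606105"] = &api.Context{
		Cluster:  "auto-606105",
		AuthInfo: "kubernetes-admin",
	}
	cfg.CurrentContext = "kubernetes-admin@auto-606105"
	if err := clientcmd.WriteToFile(*cfg, "admin.conf"); err != nil {
		panic(err)
	}
}
```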
	I0626 21:09:37.435103   53649 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 21:09:37.437896   53649 out.go:204]   - Booting up control plane ...
	I0626 21:09:37.437990   53649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 21:09:37.439379   53649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 21:09:37.440751   53649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 21:09:37.444369   53649 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 21:09:37.449430   53649 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
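
wait-control-plane then blocks until the static pods answer, within the 4m0s budget noted in the log. A rough analogue that polls the apiserver's /healthz endpoint; the URL and poll interval are assumptions for illustration:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Rough analogue of kubeadm's wait-control-plane step: poll the
// apiserver's /healthz until it answers 200, up to the 4m0s budget.
func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.66:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("control plane is healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the control plane")
}
```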
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 20:47:04 UTC, ends at Mon 2023-06-26 21:09:39 UTC. --
	Jun 26 21:09:38 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:38.926264626Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ef1db85fdcee2758742fb580c90ab8335f7580be3d8642ec686bbede93f6aa02,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-8qcw9,Uid:b81a167a-fb12-4a9c-89ae-93ff6474dc30,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812769991741162,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-8qcw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81a167a-fb12-4a9c-89ae-93ff6474dc30,k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:52:49.603008392Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0ff5c6fb-2917-4a8a-a33a-20631ff
9fc1f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812769541363664,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\
",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-06-26T20:52:49.205044313Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-q7zms,Uid:86e16893-4f35-4d11-8346-81fee8cb607a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812766618987468,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:52:46.292309633Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&PodSandboxMetadata{Name:kube-proxy-k4hzc,Uid:036703e4-59a2-4be1-
84ad-621e52766052,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812766521488724,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-06-26T20:52:46.143474133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-473235,Uid:656421cf3ef137c1ff662f6e765c58ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812744323810065,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 656421cf3ef137c1ff662f6e765c58ea,kubernetes.io/config.seen: 2023-06-26T20:52:23.784784586Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-473235,Uid:1189a128aff0a949bc1bfa3ad7e57b22,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812744319265667,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc1bfa3ad7e57b22,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.238:2379,kubernetes.io/config.hash: 1189a128aff0a949bc1bfa3ad7e57b22,kubernetes.io/config.seen: 2023-06-26T20:52:23.78
4782383Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-473235,Uid:d386f1ebb7cb61ab5106c2778267349e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812744287441353,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.238:8444,kubernetes.io/config.hash: d386f1ebb7cb61ab5106c2778267349e,kubernetes.io/config.seen: 2023-06-26T20:52:23.784783686Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&PodSandboxMetadata{Name:kube-sche
duler-default-k8s-diff-port-473235,Uid:b9f145a20d99ab0853fba01701760a25,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1687812744270890230,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b9f145a20d99ab0853fba01701760a25,kubernetes.io/config.seen: 2023-06-26T20:52:23.784778695Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=7695d37e-bcba-48cc-afdd-18f2d4f55664 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 26 21:09:38 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:38.926989918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=070406af-7d2a-44f0-a3a7-0fb7a0d7370f name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 21:09:38 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:38.927095050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=070406af-7d2a-44f0-a3a7-0fb7a0d7370f name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 21:09:38 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:38.927391728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d,PodSandboxId:5aa916845b00373694521c35ca744d53c1d36369ead159a6e81914a681bd4b7e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812770501120167,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 152de092,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33,PodSandboxId:ba27bdfc9d888375bac9b77ddb6e631eaff8dec00c68488ddecaead3c18b7995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812770373727467,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4hzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036703e4-59a2-4be1-84ad-621e52766052,},Annotations:map[string]string{io.kubernetes.container.hash: 26e2eaf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6,PodSandboxId:0175a496a170417c30eea492f28d22a80122acea8dfcc7920cff355a5fbdaa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812769156904487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-q7zms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86e16893-4f35-4d11-8346-81fee8cb607a,},Annotations:map[string]string{io.kubernetes.container.hash: e83624bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289,PodSandboxId:9252b6099b201887b4160e3d032e4b741da6554880fb7c7434148f7ecdf62b75,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812745751790520,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b9f145a20d99ab0853fba01701760a25,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854,PodSandboxId:a43553760c796763945193fe478b10cbe27003ebd2a1d2c78e5951a5373abc2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812745633512873,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1189a128aff0a949bc
1bfa3ad7e57b22,},Annotations:map[string]string{io.kubernetes.container.hash: b6aa293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f,PodSandboxId:e3b844a6be5da0cde3e4796110c629bf3ae540d68671b628bcacebdd538471db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812745030324979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=070406af-7d2a-44f0-a3a7-0fb7a0d7370f name=/runtime.v1.RuntimeService/ListContainers
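
Each journal entry above is CRI-O answering a runtime.v1(alpha2).RuntimeService/ListContainers poll over its unix socket. A sketch of issuing the same call with the cri-api gRPC client, assuming a stock CRI-O socket path; an empty filter returns the full list, just as the "No filters were applied" lines say:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// Issues the same runtime.v1.RuntimeService/ListContainers call the
// CRI-O journal above is answering.
func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtime.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtime.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}
```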
	Jun 26 21:09:38 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:38.969454673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=262ed8c1-66c7-4cc9-babd-d38df156a93a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:38 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:38.969581670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=262ed8c1-66c7-4cc9-babd-d38df156a93a name=/runtime.v1alpha2.RuntimeService/ListContainers
	[ListContainersResponse identical to the 21:09:38.927391728Z response above; omitted]
	Jun 26 21:09:39 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:39.015666952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4074e610-16da-407d-8767-aec5ed5486e1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:39 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:39.015806442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4074e610-16da-407d-8767-aec5ed5486e1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	[ListContainersResponse identical to the 21:09:38.927391728Z response above; omitted]
	Jun 26 21:09:39 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:39.067095001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=41f7a2f9-23a9-4701-b140-fdf0214bacb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:39 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:39.067295612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=41f7a2f9-23a9-4701-b140-fdf0214bacb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	[ListContainersResponse identical to the 21:09:38.927391728Z response above; omitted]
	Jun 26 21:09:39 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:39.105664712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f3e528aa-82c0-4df8-80a1-5872e8f5b4ba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:39 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:39.105816028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f3e528aa-82c0-4df8-80a1-5872e8f5b4ba name=/runtime.v1alpha2.RuntimeService/ListContainers
	[ListContainersResponse identical to the 21:09:38.927391728Z response above; omitted]
	Jun 26 21:09:39 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:39.141415039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=71a1e5f1-14a2-4ee1-9233-a17adf99626b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:39 default-k8s-diff-port-473235 crio[726]: time="2023-06-26 21:09:39.141477537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=71a1e5f1-14a2-4ee1-9233-a17adf99626b name=/runtime.v1alpha2.RuntimeService/ListContainers
	[ListContainersResponse identical to the 21:09:38.927391728Z response above; omitted]
ernetes.pod.uid: 656421cf3ef137c1ff662f6e765c58ea,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4,PodSandboxId:7ab914de47588b9d19e13a0e6877349f47bdef88cc845bdba21db47391cc59ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812744825100184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-473235,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d386f1ebb7cb61ab5106c2778267349e,},Annotations:map[string]string{io.kubernetes.container.hash: 60fd8fcc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=71a1e5f1-14a2-4ee1-9233-a17adf99626b name=/runtime.v1alpha2.RuntimeService/ListContainers
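	The ListContainers entries above are CRI-O's debug log of a client polling the CRI socket; each response enumerates the same seven running containers. A hedged way to reproduce the listing by hand, reusing this run's binary and profile name (crictl being present on the guest is an assumption about the minikube node image, not something shown in this log):

	  out/minikube-linux-amd64 -p default-k8s-diff-port-473235 ssh "sudo crictl ps -a"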
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	42f5349c90125       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   5aa916845b003
	c96344f29939b       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   16 minutes ago      Running             kube-proxy                0                   ba27bdfc9d888
	6a2b730696b42       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   0175a496a1704
	ac747b676e948       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   17 minutes ago      Running             kube-scheduler            2                   9252b6099b201
	27d078cc8ea69       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   17 minutes ago      Running             etcd                      2                   a43553760c796
	5e21f96f0cb7d       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   17 minutes ago      Running             kube-controller-manager   2                   e3b844a6be5da
	5903c5fd077ea       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   17 minutes ago      Running             kube-apiserver            2                   7ab914de47588
	
	* 
	* ==> coredns [6a2b730696b42ce756548fcc7db66a2cf49421fd43963649c0a3dc27910eaab6] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55036 - 5397 "HINFO IN 5672118673736255248.4507566907500056261. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03850101s
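	The CoreDNS log shows a clean start, one config reload, and the usual HINFO self-test probe (NXDOMAIN is the expected answer for that random-name query), so cluster DNS looks healthy here. A hedged spot-check from the same context (the busybox image and pod name are illustrative choices, not taken from this run):

	  kubectl --context default-k8s-diff-port-473235 run dns-check --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default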
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-473235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-473235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=default-k8s-diff-port-473235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_52_34_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:52:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-473235
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 21:09:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 21:08:12 +0000   Mon, 26 Jun 2023 20:52:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 21:08:12 +0000   Mon, 26 Jun 2023 20:52:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 21:08:12 +0000   Mon, 26 Jun 2023 20:52:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 21:08:12 +0000   Mon, 26 Jun 2023 20:52:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.238
	  Hostname:    default-k8s-diff-port-473235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 92faed464cce40ff8645cc065dd0c89b
	  System UUID:                92faed46-4cce-40ff-8645-cc065dd0c89b
	  Boot ID:                    3de82d6f-cc55-451b-9343-bc4f633f6654
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-q7zms                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-473235                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-473235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-473235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-k4hzc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-473235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-74d5c6b9c-8qcw9                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node default-k8s-diff-port-473235 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             17m                kubelet          Node default-k8s-diff-port-473235 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m                kubelet          Node default-k8s-diff-port-473235 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-473235 event: Registered Node default-k8s-diff-port-473235 in Controller
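	The node description above is plain `kubectl describe node` output; to regenerate it against this profile (minikube names the kubecontext after the profile):

	  kubectl --context default-k8s-diff-port-473235 describe node default-k8s-diff-port-473235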
	
	* 
	* ==> dmesg <==
	* [Jun26 20:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073640] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jun26 20:47] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.236869] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151817] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.505159] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.262372] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.116817] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.154347] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.127880] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.289153] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +17.851418] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +19.057076] kauditd_printk_skb: 29 callbacks suppressed
	[Jun26 20:52] systemd-fstab-generator[3546]: Ignoring "noauto" for root device
	[ +10.858660] systemd-fstab-generator[3874]: Ignoring "noauto" for root device
	[ +21.768540] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [27d078cc8ea69b502cfa5e2d64d4186d21d7f9226e50902d929bc4aa63ff3854] <==
	* {"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"30634cbf5a4943f7","local-member-id":"d6c736ad0f9c7068","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:28.157Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:28.158Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.238:2379"}
	{"level":"info","ts":"2023-06-26T21:02:28.601Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":728}
	{"level":"info","ts":"2023-06-26T21:02:28.604Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":728,"took":"2.234663ms","hash":572841209}
	{"level":"info","ts":"2023-06-26T21:02:28.604Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":572841209,"revision":728,"compact-revision":-1}
	{"level":"warn","ts":"2023-06-26T21:07:22.102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.877711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2023-06-26T21:07:22.103Z","caller":"traceutil/trace.go:171","msg":"trace[2118971896] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1208; }","duration":"103.566239ms","start":"2023-06-26T21:07:21.999Z","end":"2023-06-26T21:07:22.103Z","steps":["trace[2118971896] 'range keys from in-memory index tree'  (duration: 102.715305ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:07:28.062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.336066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-06-26T21:07:28.062Z","caller":"traceutil/trace.go:171","msg":"trace[402611706] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1212; }","duration":"100.457591ms","start":"2023-06-26T21:07:27.962Z","end":"2023-06-26T21:07:28.062Z","steps":["trace[402611706] 'count revisions from in-memory index tree'  (duration: 100.244793ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T21:07:28.613Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":971}
	{"level":"info","ts":"2023-06-26T21:07:28.616Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":971,"took":"1.756566ms","hash":3078367479}
	{"level":"info","ts":"2023-06-26T21:07:28.616Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3078367479,"revision":971,"compact-revision":728}
	{"level":"warn","ts":"2023-06-26T21:08:35.583Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8099874534912454432,"retry-timeout":"500ms"}
	{"level":"info","ts":"2023-06-26T21:08:35.689Z","caller":"traceutil/trace.go:171","msg":"trace[425803790] linearizableReadLoop","detail":"{readStateIndex:1477; appliedIndex:1476; }","duration":"605.759251ms","start":"2023-06-26T21:08:35.083Z","end":"2023-06-26T21:08:35.689Z","steps":["trace[425803790] 'read index received'  (duration: 605.627804ms)","trace[425803790] 'applied index is now lower than readState.Index'  (duration: 131.024µs)"],"step_count":2}
	{"level":"info","ts":"2023-06-26T21:08:35.689Z","caller":"traceutil/trace.go:171","msg":"trace[541027124] transaction","detail":"{read_only:false; response_revision:1270; number_of_response:1; }","duration":"943.682829ms","start":"2023-06-26T21:08:34.745Z","end":"2023-06-26T21:08:35.689Z","steps":["trace[541027124] 'process raft request'  (duration: 943.079286ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:08:35.689Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.579738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T21:08:35.689Z","caller":"traceutil/trace.go:171","msg":"trace[622535274] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1270; }","duration":"192.750263ms","start":"2023-06-26T21:08:35.497Z","end":"2023-06-26T21:08:35.689Z","steps":["trace[622535274] 'agreement among raft nodes before linearized reading'  (duration: 192.382958ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:08:35.690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"607.038261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-06-26T21:08:35.690Z","caller":"traceutil/trace.go:171","msg":"trace[921818459] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:0; response_revision:1270; }","duration":"607.111174ms","start":"2023-06-26T21:08:35.083Z","end":"2023-06-26T21:08:35.690Z","steps":["trace[921818459] 'agreement among raft nodes before linearized reading'  (duration: 606.967732ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:08:35.690Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-26T21:08:35.083Z","time spent":"607.181709ms","remote":"127.0.0.1:43852","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":13,"response size":30,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true "}
	{"level":"warn","ts":"2023-06-26T21:08:35.690Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-26T21:08:34.745Z","time spent":"943.791325ms","remote":"127.0.0.1:43810","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1268 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-06-26T21:09:34.171Z","caller":"traceutil/trace.go:171","msg":"trace[525799764] transaction","detail":"{read_only:false; response_revision:1318; number_of_response:1; }","duration":"107.16215ms","start":"2023-06-26T21:09:34.064Z","end":"2023-06-26T21:09:34.171Z","steps":["trace[525799764] 'process raft request'  (duration: 106.953258ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  21:09:39 up 22 min,  0 users,  load average: 0.02, 0.21, 0.21
	Linux default-k8s-diff-port-473235 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5903c5fd077eaf73ac2b12554913116d25d09ca7481de080efd90cf1889693a4] <==
	* I0626 21:07:30.265067       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0626 21:07:30.378335       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.116.83:443: connect: connection refused
	I0626 21:07:30.378424       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:07:31.378712       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:07:31.378784       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:07:31.378801       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:07:31.378978       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:07:31.379211       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:07:31.380523       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:08:30.264740       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.116.83:443: connect: connection refused
	I0626 21:08:30.264841       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:08:31.379945       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:08:31.380020       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:08:31.380039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:08:31.381090       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:08:31.381337       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:08:31.381383       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:08:35.691541       1 trace.go:219] Trace[1999792776]: "Update" accept:application/json, */*,audit-id:74ca4a30-c8ca-4f63-aa13-1c48ab5f8bf4,client:192.168.61.238,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (26-Jun-2023 21:08:34.744) (total time: 947ms):
	Trace[1999792776]: ["GuaranteedUpdate etcd3" audit-id:74ca4a30-c8ca-4f63-aa13-1c48ab5f8bf4,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 946ms (21:08:34.744)
	Trace[1999792776]:  ---"Txn call completed" 945ms (21:08:35.691)]
	Trace[1999792776]: [947.235799ms] [947.235799ms] END
	I0626 21:09:30.265624       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.116.83:443: connect: connection refused
	I0626 21:09:30.265973       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
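	Every aggregation error in the apiserver log traces to one cause: the metrics-server Service at 10.111.116.83:443 refuses connections, so discovery and OpenAPI fetches for metrics.k8s.io/v1beta1 keep returning 503. Two safe checks that would confirm the broken wiring (standard kubectl calls, not part of this run):

	  kubectl --context default-k8s-diff-port-473235 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context default-k8s-diff-port-473235 -n kube-system get endpoints metrics-server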
	
	* 
	* ==> kube-controller-manager [5e21f96f0cb7dbbb90dfe232bbaf0d53c3d195f7120308fb2d0bf72a8a503e1f] <==
	* W0626 21:03:16.027543       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:03:45.426569       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:03:46.035746       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:15.433378       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:16.044775       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:45.441228       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:46.056437       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:15.446845       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:05:16.076868       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:45.453655       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:05:46.087713       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:06:15.461854       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:06:16.096364       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:06:45.469889       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:06:46.107127       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:07:15.476945       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:07:16.120459       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:07:45.483945       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:07:46.129539       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:08:15.489399       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:08:16.138415       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:08:45.496116       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:08:46.151745       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:09:15.502303       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:09:16.166696       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
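	These controller-manager errors are the downstream echo of the same dead metrics.k8s.io APIService: resource-quota and garbage-collector discovery re-fails every 30 seconds. If the aggregated API were written off entirely, unregistering it would silence the loop; a hedged, destructive sketch that this test run does not perform:

	  kubectl --context default-k8s-diff-port-473235 delete apiservice v1beta1.metrics.k8s.io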
	
	* 
	* ==> kube-proxy [c96344f29939b3f0d3c9b0b75da865cdbd186bd13f427150c0da0c658f5b2b33] <==
	* I0626 20:52:51.077409       1 node.go:141] Successfully retrieved node IP: 192.168.61.238
	I0626 20:52:51.077580       1 server_others.go:110] "Detected node IP" address="192.168.61.238"
	I0626 20:52:51.077658       1 server_others.go:554] "Using iptables proxy"
	I0626 20:52:51.126534       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0626 20:52:51.126614       1 server_others.go:192] "Using iptables Proxier"
	I0626 20:52:51.126975       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 20:52:51.128297       1 server.go:658] "Version info" version="v1.27.3"
	I0626 20:52:51.128368       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 20:52:51.130539       1 config.go:188] "Starting service config controller"
	I0626 20:52:51.130981       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 20:52:51.131327       1 config.go:97] "Starting endpoint slice config controller"
	I0626 20:52:51.131386       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 20:52:51.133313       1 config.go:315] "Starting node config controller"
	I0626 20:52:51.133578       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 20:52:51.232300       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 20:52:51.232331       1 shared_informer.go:318] Caches are synced for service config
	I0626 20:52:51.234710       1 shared_informer.go:318] Caches are synced for node config
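	kube-proxy started in iptables mode and synced all three informer caches within roughly 100ms, so Service routing itself is healthy. A hedged peek at the NAT rules it programs (KUBE-SERVICES is the standard entry chain for the iptables proxier):

	  out/minikube-linux-amd64 -p default-k8s-diff-port-473235 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head"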
	
	* 
	* ==> kube-scheduler [ac747b676e9487cb4a0c216d1b215462466670edbf348d4a465c1bda10d61289] <==
	* W0626 20:52:31.356334       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:31.356452       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:31.416591       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 20:52:31.416652       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0626 20:52:31.454908       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:31.455033       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:31.545196       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:52:31.545440       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 20:52:31.554745       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 20:52:31.554876       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0626 20:52:31.561066       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:31.561244       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:31.631993       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:52:31.632104       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0626 20:52:31.641555       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 20:52:31.641607       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 20:52:31.753372       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:52:31.753448       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 20:52:31.844530       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:31.844596       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:31.845701       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 20:52:31.846102       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0626 20:52:31.846631       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:52:31.846654       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0626 20:52:33.597347       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 20:47:04 UTC, ends at Mon 2023-06-26 21:09:39 UTC. --
	Jun 26 21:07:34 default-k8s-diff-port-473235 kubelet[3881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:07:34 default-k8s-diff-port-473235 kubelet[3881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:07:34 default-k8s-diff-port-473235 kubelet[3881]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:07:34 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:07:34.478587    3881 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jun 26 21:07:42 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:07:42.305289    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:07:56 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:07:56.305647    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:08:08 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:08:08.306349    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:08:23 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:08:23.305321    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:08:34 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:08:34.403344    3881 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:08:34 default-k8s-diff-port-473235 kubelet[3881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:08:34 default-k8s-diff-port-473235 kubelet[3881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:08:34 default-k8s-diff-port-473235 kubelet[3881]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:08:36 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:08:36.305460    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:08:47 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:08:47.305523    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:09:00 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:09:00.313925    3881 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 26 21:09:00 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:09:00.314017    3881 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 26 21:09:00 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:09:00.314271    3881 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rvfz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-8qcw9_kube-system(b81a167a-fb12-4a9c-89ae-93ff6474dc30): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 21:09:00 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:09:00.314321    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:09:13 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:09:13.304890    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:09:26 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:09:26.306649    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	Jun 26 21:09:34 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:09:34.406358    3881 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:09:34 default-k8s-diff-port-473235 kubelet[3881]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:09:34 default-k8s-diff-port-473235 kubelet[3881]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:09:34 default-k8s-diff-port-473235 kubelet[3881]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:09:38 default-k8s-diff-port-473235 kubelet[3881]: E0626 21:09:38.309459    3881 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-8qcw9" podUID=b81a167a-fb12-4a9c-89ae-93ff6474dc30
	
	* 
	* ==> storage-provisioner [42f5349c90125dfe99d42d4294ec9650314ec169f5cc0d29afcf8f9449cb280d] <==
	* I0626 20:52:50.790103       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 20:52:50.815820       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 20:52:50.816010       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 20:52:50.838388       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 20:52:50.838589       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-473235_07c492cf-25c5-493d-8be5-4c418e941ceb!
	I0626 20:52:50.838651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea9a2fb3-bc39-4436-8db0-dda6b489ab3d", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-473235_07c492cf-25c5-493d-8be5-4c418e941ceb became leader
	I0626 20:52:50.940762       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-473235_07c492cf-25c5-493d-8be5-4c418e941ceb!
	

                                                
                                                
-- /stdout --
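The kube-scheduler warnings captured above are startup-ordering noise rather than part of this failure: the informer list/watch calls race RBAC propagation right after the component restarts, and they stop once "Caches are synced" is logged at 20:52:33. If such warnings persisted past cache sync, the scheduler's permissions could be probed directly via impersonation. A minimal diagnostic sketch in Go (assumptions: kubectl on PATH and the context name taken from this log; this is an illustration, not part of the test harness):

	package main

	// Probes, by impersonating system:kube-scheduler, whether the RBAC
	// permissions the reflector warnings complain about are granted.
	// "kubectl auth can-i" prints yes/no and exits non-zero on "no",
	// which this sketch deliberately ignores.

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		resources := []string{
			"csidrivers.storage.k8s.io",
			"persistentvolumeclaims",
			"pods",
			"storageclasses.storage.k8s.io",
			"poddisruptionbudgets.policy",
		}
		for _, r := range resources {
			out, _ := exec.Command("kubectl",
				"--context", "default-k8s-diff-port-473235",
				"auth", "can-i", "list", r,
				"--as", "system:kube-scheduler").CombinedOutput()
			fmt.Printf("list %s -> %s\n", r, strings.TrimSpace(string(out)))
		}
	}

Each line should print "yes" once the ClusterRoleBindings for system:kube-scheduler are visible to the API server.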
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-473235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-8qcw9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-473235 describe pod metrics-server-74d5c6b9c-8qcw9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-473235 describe pod metrics-server-74d5c6b9c-8qcw9: exit status 1 (88.65726ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-8qcw9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-473235 describe pod metrics-server-74d5c6b9c-8qcw9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (463.96s)
E0626 21:12:34.790409   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
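Two artifacts at the tail of this test are side effects of teardown rather than causes: the final describe returns NotFound because the metrics-server pod was already gone when the post-mortem ran, and the cert_rotation error most likely fires because a background certificate watcher tried to reload a client.crt that had been removed along with the profile. The primary failure is the wait loop expiring while metrics-server stayed in ImagePullBackOff against the deliberately unresolvable fake.domain registry. A minimal Go sketch of that style of poll (assumptions: kubectl on PATH, the context name from this log, and a hypothetical k8s-app=metrics-server label; an approximation of the harness's wait, not its actual code):

	package main

	// Polls until every pod matching a label selector is Running, or a
	// deadline passes. The field selector mirrors the "non-running pods"
	// listing done by helpers_test.go in the trace above.

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podsNotRunning(context, selector string) ([]string, error) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "po", "-A", "-l", selector,
			"--field-selector=status.phase!=Running",
			"-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		deadline := time.Now().Add(9 * time.Minute)
		for time.Now().Before(deadline) {
			pending, err := podsNotRunning("default-k8s-diff-port-473235", "k8s-app=metrics-server")
			if err == nil && len(pending) == 0 {
				fmt.Println("all selected pods Running")
				return
			}
			fmt.Println("still waiting; not running:", pending, "err:", err)
			time.Sleep(15 * time.Second)
		}
		fmt.Println("deadline exceeded: pods never became Running")
	}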

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (175.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0626 21:04:00.824035   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490377 -n old-k8s-version-490377
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-06-26 21:06:45.376035277 +0000 UTC m=+5469.876063109
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-490377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-490377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.546µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-490377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
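The empty deployment info on the line above follows directly from the two lines before it: the test's 9m context had already expired, so the follow-up kubectl describe was cancelled after 2.546µs without ever reaching the API server. The Audit and Last Start sections below point at the underlying cause: the stop of old-k8s-version-490377 started at 20:39 never recorded an end time, and the subsequent start spent minutes in "no route to host" / "host is not running" provisioning retries, leaving the dashboard addon no time to come up. Against a reachable cluster, the image assertion can be approximated as follows (assumptions: kubectl on PATH and the context name from this log; this approximates the describe-and-match check in start_stop_delete_test.go, it is not the harness code):

	package main

	// Lists the images used by deployments in the kubernetes-dashboard
	// namespace and checks that the custom MetricsScraper image was
	// applied. Sketch only; assumes a reachable cluster, which this
	// run no longer had.

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "old-k8s-version-490377",
			"-n", "kubernetes-dashboard", "get", "deploy",
			"-o", "jsonpath={.items[*].spec.template.spec.containers[*].image}").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		images := string(out)
		fmt.Println("deployment images:", images)
		if !strings.Contains(images, "registry.k8s.io/echoserver:1.4") {
			fmt.Println("expected custom scraper image registry.k8s.io/echoserver:1.4 is missing")
		}
	}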
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490377 -n old-k8s-version-490377
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-490377 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-490377 logs -n 25: (1.544441613s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-490377        | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-123924                              | stopped-upgrade-123924       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p running-upgrade-149180                              | running-upgrade-149180       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-686634                              | cert-expiration-686634       | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-603225 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:39 UTC |
	|         | disable-driver-mounts-603225                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:39 UTC | 26 Jun 23 20:41 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-934450             | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC | 26 Jun 23 20:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490377             | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-299839            | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-473235  | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC | 26 Jun 23 20:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC |                     |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934450                  | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-299839                 | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-473235       | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:52 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 20:44:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 20:44:35.222921   47779 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:44:35.223059   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223070   47779 out.go:309] Setting ErrFile to fd 2...
	I0626 20:44:35.223074   47779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:44:35.223199   47779 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:44:35.223797   47779 out.go:303] Setting JSON to false
	I0626 20:44:35.224674   47779 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5222,"bootTime":1687807053,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 20:44:35.224734   47779 start.go:137] virtualization: kvm guest
	I0626 20:44:35.226901   47779 out.go:177] * [default-k8s-diff-port-473235] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 20:44:35.228842   47779 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 20:44:35.228804   47779 notify.go:220] Checking for updates...
	I0626 20:44:35.230224   47779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 20:44:35.231788   47779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:44:35.233239   47779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:44:35.234554   47779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 20:44:35.236823   47779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 20:44:35.238432   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:44:35.238825   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.238878   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.253669   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0626 20:44:35.254014   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.254589   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.254610   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.254907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.255090   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.255322   47779 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 20:44:35.255597   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:44:35.255627   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:44:35.269620   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39451
	I0626 20:44:35.270027   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:44:35.270571   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:44:35.270599   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:44:35.270857   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:44:35.271037   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:44:35.302607   47779 out.go:177] * Using the kvm2 driver based on existing profile
	I0626 20:44:35.303877   47779 start.go:297] selected driver: kvm2
	I0626 20:44:35.303889   47779 start.go:954] validating driver "kvm2" against &{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.303997   47779 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 20:44:35.304600   47779 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.304681   47779 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 20:44:35.319036   47779 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 20:44:35.319459   47779 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 20:44:35.319499   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:44:35.319516   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:44:35.319532   47779 start_flags.go:319] config:
	{Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:44:35.319725   47779 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 20:44:35.321690   47779 out.go:177] * Starting control plane node default-k8s-diff-port-473235 in cluster default-k8s-diff-port-473235
	I0626 20:44:33.713644   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:35.323076   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:44:35.323119   47779 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 20:44:35.323145   47779 cache.go:57] Caching tarball of preloaded images
	I0626 20:44:35.323245   47779 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 20:44:35.323260   47779 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 20:44:35.323385   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:44:35.323607   47779 start.go:365] acquiring machines lock for default-k8s-diff-port-473235: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:44:39.793629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:42.865602   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:48.945651   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:52.017646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:44:58.097650   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:01.169629   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:07.249647   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:10.321634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:16.401660   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:19.473641   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:25.553634   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:28.625721   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:34.705617   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:37.777753   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:43.857659   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:46.929661   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:53.009637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:45:56.081646   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:02.161637   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:05.233633   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:11.313640   46683 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.111:22: connect: no route to host
	I0626 20:46:14.317303   47309 start.go:369] acquired machines lock for "no-preload-934450" in 2m47.59820508s
	I0626 20:46:14.317355   47309 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:14.317388   47309 fix.go:54] fixHost starting: 
	I0626 20:46:14.317703   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:14.317733   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:14.331991   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0626 20:46:14.332358   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:14.332862   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:46:14.332888   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:14.333180   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:14.333368   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:14.333556   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:46:14.334930   47309 fix.go:102] recreateIfNeeded on no-preload-934450: state=Stopped err=<nil>
	I0626 20:46:14.334954   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	W0626 20:46:14.335122   47309 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:14.336692   47309 out.go:177] * Restarting existing kvm2 VM for "no-preload-934450" ...
	I0626 20:46:14.338056   47309 main.go:141] libmachine: (no-preload-934450) Calling .Start
	I0626 20:46:14.338201   47309 main.go:141] libmachine: (no-preload-934450) Ensuring networks are active...
	I0626 20:46:14.339255   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network default is active
	I0626 20:46:14.339575   47309 main.go:141] libmachine: (no-preload-934450) Ensuring network mk-no-preload-934450 is active
	I0626 20:46:14.339980   47309 main.go:141] libmachine: (no-preload-934450) Getting domain xml...
	I0626 20:46:14.340638   47309 main.go:141] libmachine: (no-preload-934450) Creating domain...
	I0626 20:46:15.550725   47309 main.go:141] libmachine: (no-preload-934450) Waiting to get IP...
	I0626 20:46:15.551641   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.552053   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.552125   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.552057   48070 retry.go:31] will retry after 285.629833ms: waiting for machine to come up
	I0626 20:46:15.839584   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:15.839950   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:15.839976   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:15.839920   48070 retry.go:31] will retry after 318.234269ms: waiting for machine to come up
	I0626 20:46:16.159361   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.159793   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.159823   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.159752   48070 retry.go:31] will retry after 486.280811ms: waiting for machine to come up
	I0626 20:46:14.315357   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:14.315401   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:46:14.317194   46683 machine.go:91] provisioned docker machine in 4m37.381545898s
	I0626 20:46:14.317230   46683 fix.go:56] fixHost completed within 4m37.403983922s
	I0626 20:46:14.317236   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 4m37.404002624s
	W0626 20:46:14.317252   46683 start.go:672] error starting host: provision: host is not running
	W0626 20:46:14.317326   46683 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0626 20:46:14.317333   46683 start.go:687] Will try again in 5 seconds ...
	I0626 20:46:16.647364   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:16.647777   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:16.647803   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:16.647721   48070 retry.go:31] will retry after 396.658606ms: waiting for machine to come up
	I0626 20:46:17.046604   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.047131   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.047156   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.047033   48070 retry.go:31] will retry after 741.382401ms: waiting for machine to come up
	I0626 20:46:17.789616   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:17.790035   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:17.790068   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:17.790014   48070 retry.go:31] will retry after 636.769895ms: waiting for machine to come up
	I0626 20:46:18.427899   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:18.428300   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:18.428326   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:18.428272   48070 retry.go:31] will retry after 869.736092ms: waiting for machine to come up
	I0626 20:46:19.299429   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:19.299742   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:19.299765   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:19.299717   48070 retry.go:31] will retry after 1.261709663s: waiting for machine to come up
	I0626 20:46:20.563421   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:20.563778   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:20.563807   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:20.563751   48070 retry.go:31] will retry after 1.280588584s: waiting for machine to come up
	I0626 20:46:19.318965   46683 start.go:365] acquiring machines lock for old-k8s-version-490377: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 20:46:21.846094   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:21.846530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:21.846557   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:21.846475   48070 retry.go:31] will retry after 1.542478163s: waiting for machine to come up
	I0626 20:46:23.391088   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:23.391530   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:23.391559   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:23.391474   48070 retry.go:31] will retry after 2.115450652s: waiting for machine to come up
	I0626 20:46:25.508447   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:25.508882   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:25.508915   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:25.508826   48070 retry.go:31] will retry after 3.403199971s: waiting for machine to come up
	I0626 20:46:28.916347   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:28.916756   47309 main.go:141] libmachine: (no-preload-934450) DBG | unable to find current IP address of domain no-preload-934450 in network mk-no-preload-934450
	I0626 20:46:28.916782   47309 main.go:141] libmachine: (no-preload-934450) DBG | I0626 20:46:28.916706   48070 retry.go:31] will retry after 3.011345508s: waiting for machine to come up
	I0626 20:46:33.094365   47605 start.go:369] acquired machines lock for "embed-certs-299839" in 2m23.878841424s
	I0626 20:46:33.094419   47605 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:33.094440   47605 fix.go:54] fixHost starting: 
	I0626 20:46:33.094827   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:33.094856   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:33.114045   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0626 20:46:33.114400   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:33.114927   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:46:33.114949   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:33.115244   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:33.115434   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:33.115573   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:46:33.116751   47605 fix.go:102] recreateIfNeeded on embed-certs-299839: state=Stopped err=<nil>
	I0626 20:46:33.116783   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	W0626 20:46:33.116944   47605 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:33.119904   47605 out.go:177] * Restarting existing kvm2 VM for "embed-certs-299839" ...
	I0626 20:46:33.121277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Start
	I0626 20:46:33.121442   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring networks are active...
	I0626 20:46:33.122062   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network default is active
	I0626 20:46:33.122397   47605 main.go:141] libmachine: (embed-certs-299839) Ensuring network mk-embed-certs-299839 is active
	I0626 20:46:33.122783   47605 main.go:141] libmachine: (embed-certs-299839) Getting domain xml...
	I0626 20:46:33.123400   47605 main.go:141] libmachine: (embed-certs-299839) Creating domain...
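The `fix.go` lines above take the reuse path: the plugin driver reports `state=Stopped`, so instead of creating a new domain the existing one is restarted in place ("Skipping create...Using existing machine configuration"). A rough sketch of that decision, with a hypothetical `Driver` interface standing in for the RPC-backed libmachine plugin (note the "Plugin server listening at 127.0.0.1:..." lines):

```go
package main

import "fmt"

// State mirrors the machine states the log reports.
type State int

const (
	Running State = iota
	Stopped
)

// Driver is a stand-in for the libmachine driver plugin interface.
type Driver interface {
	GetState() (State, error)
	Start() error
}

// fixHost reuses an existing machine: nothing to do if it is running,
// restart it in place if it is stopped.
func fixHost(name string, d Driver) error {
	st, err := d.GetState()
	if err != nil {
		return fmt.Errorf("recreateIfNeeded on %s: %w", name, err)
	}
	if st == Running {
		return nil
	}
	fmt.Printf("* Restarting existing kvm2 VM for %q ...\n", name)
	return d.Start()
}

type fakeDriver struct{ st State }

func (f *fakeDriver) GetState() (State, error) { return f.st, nil }
func (f *fakeDriver) Start() error             { f.st = Running; return nil }

func main() {
	d := &fakeDriver{st: Stopped}
	if err := fixHost("embed-certs-299839", d); err != nil {
		fmt.Println("fixHost:", err)
	}
}
```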
	I0626 20:46:31.930997   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931492   47309 main.go:141] libmachine: (no-preload-934450) Found IP for machine: 192.168.50.38
	I0626 20:46:31.931507   47309 main.go:141] libmachine: (no-preload-934450) Reserving static IP address...
	I0626 20:46:31.931524   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has current primary IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.931877   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.931901   47309 main.go:141] libmachine: (no-preload-934450) DBG | skip adding static IP to network mk-no-preload-934450 - found existing host DHCP lease matching {name: "no-preload-934450", mac: "52:54:00:cf:d3:cf", ip: "192.168.50.38"}
	I0626 20:46:31.931916   47309 main.go:141] libmachine: (no-preload-934450) Reserved static IP address: 192.168.50.38
	I0626 20:46:31.931928   47309 main.go:141] libmachine: (no-preload-934450) DBG | Getting to WaitForSSH function...
	I0626 20:46:31.931939   47309 main.go:141] libmachine: (no-preload-934450) Waiting for SSH to be available...
	I0626 20:46:31.934393   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934786   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:31.934814   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:31.934954   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH client type: external
	I0626 20:46:31.934971   47309 main.go:141] libmachine: (no-preload-934450) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa (-rw-------)
	I0626 20:46:31.935060   47309 main.go:141] libmachine: (no-preload-934450) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:31.935091   47309 main.go:141] libmachine: (no-preload-934450) DBG | About to run SSH command:
	I0626 20:46:31.935112   47309 main.go:141] libmachine: (no-preload-934450) DBG | exit 0
	I0626 20:46:32.021036   47309 main.go:141] libmachine: (no-preload-934450) DBG | SSH cmd err, output: <nil>: 
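Note "Using SSH client type: external" above: reachability is probed by shelling out to `/usr/bin/ssh` with a fixed option list and running `exit 0` until it succeeds. A minimal sketch of that probe with `os/exec` (the option list is copied from the log; the helper name is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `exit 0` over the external ssh client, the same way the
// "About to run SSH command: exit 0" step checks the machine is reachable.
func probeSSH(host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := probeSSH("192.168.50.38", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	}
}
```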
	I0626 20:46:32.021357   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetConfigRaw
	I0626 20:46:32.022056   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.024943   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025390   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.025426   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.025663   47309 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/config.json ...
	I0626 20:46:32.025851   47309 machine.go:88] provisioning docker machine ...
	I0626 20:46:32.025868   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.026092   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026257   47309 buildroot.go:166] provisioning hostname "no-preload-934450"
	I0626 20:46:32.026280   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.026450   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.028213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028583   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.028618   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.028699   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.028869   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029019   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.029154   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.029415   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.029867   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.029887   47309 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-934450 && echo "no-preload-934450" | sudo tee /etc/hostname
	I0626 20:46:32.150597   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-934450
	
	I0626 20:46:32.150629   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.153096   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153441   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.153486   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.153576   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.153773   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.153984   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.154125   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.154288   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.154697   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.154723   47309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-934450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-934450/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-934450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:32.270792   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
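The shell fragment above is the hostname fix-up: rewrite an existing `127.0.1.1` entry in /etc/hosts or append one, so the node resolves its own name. A sketch of how such a script can be templated per hostname (an illustrative helper, not the actual provisioner code):

```go
package main

import "fmt"

// hostsPatchScript renders the /etc/hosts fix-up shown in the log for a
// given hostname: update the 127.0.1.1 line if present, append otherwise.
func hostsPatchScript(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsPatchScript("no-preload-934450"))
}
```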
	I0626 20:46:32.270827   47309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:32.270890   47309 buildroot.go:174] setting up certificates
	I0626 20:46:32.270902   47309 provision.go:83] configureAuth start
	I0626 20:46:32.270922   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetMachineName
	I0626 20:46:32.271206   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:32.273824   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274189   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.274213   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.274310   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.276495   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.276896   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.276927   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.277062   47309 provision.go:138] copyHostCerts
	I0626 20:46:32.277118   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:32.277126   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:32.277188   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:32.277271   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:32.277278   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:32.277300   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:32.277351   47309 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:32.277357   47309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:32.277393   47309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:32.277450   47309 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.no-preload-934450 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube no-preload-934450]
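copyHostCerts above follows a replace-don't-merge rule: any existing ca.pem/cert.pem/key.pem under .minikube is removed and rewritten from the canonical copy in certs/, after which a server certificate is generated with the machine's IPs in its SANs. A small sketch of the remove-then-copy idiom (paths are illustrative):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// replaceFile copies src over dst, removing any stale dst first, the way
// the exec_runner lines above handle ca.pem, cert.pem and key.pem.
func replaceFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := replaceFile("certs/ca.pem", "ca.pem"); err != nil {
		fmt.Println("copy:", err)
	}
}
```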
	I0626 20:46:32.417361   47309 provision.go:172] copyRemoteCerts
	I0626 20:46:32.417430   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:32.417452   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.419946   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420300   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.420331   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.420501   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.420703   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.420864   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.421017   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.501807   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 20:46:32.524284   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:32.546766   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0626 20:46:32.569677   47309 provision.go:86] duration metric: configureAuth took 298.742863ms
	I0626 20:46:32.569711   47309 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:32.569925   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:32.570026   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.572516   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.572864   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.572901   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.573011   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.573178   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573350   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.573492   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.573646   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.574084   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.574102   47309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:32.859482   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:32.859509   47309 machine.go:91] provisioned docker machine in 833.647496ms
	I0626 20:46:32.859519   47309 start.go:300] post-start starting for "no-preload-934450" (driver="kvm2")
	I0626 20:46:32.859527   47309 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:32.859543   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:32.859892   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:32.859942   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.862731   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863099   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.863131   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.863250   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.863434   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.863570   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.863698   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:32.946748   47309 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:32.951257   47309 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:32.951278   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:32.951351   47309 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:32.951436   47309 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:32.951516   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:32.959676   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:32.982687   47309 start.go:303] post-start completed in 123.154915ms
	I0626 20:46:32.982714   47309 fix.go:56] fixHost completed within 18.665325334s
	I0626 20:46:32.982763   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:32.985318   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985693   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:32.985725   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:32.985868   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:32.986072   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986226   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:32.986388   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:32.986547   47309 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:32.986951   47309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0626 20:46:32.986968   47309 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:46:33.094211   47309 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812393.043726278
	
	I0626 20:46:33.094239   47309 fix.go:206] guest clock: 1687812393.043726278
	I0626 20:46:33.094248   47309 fix.go:219] Guest: 2023-06-26 20:46:33.043726278 +0000 UTC Remote: 2023-06-26 20:46:32.98271893 +0000 UTC m=+186.399054274 (delta=61.007348ms)
	I0626 20:46:33.094272   47309 fix.go:190] guest clock delta is within tolerance: 61.007348ms
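The clock check above parses the guest's `date +%s.%N` output into a timestamp and compares it against the host clock; only a delta beyond tolerance would trigger a time reset, and the 61ms here passes. A sketch of the parse-and-compare step (the 2s threshold below is a placeholder, not minikube's actual tolerance):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1687812393.043726278" (seconds.nanoseconds from
// `date +%s.%N`) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1687812393.043726278")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n",
		delta, delta < 2*time.Second)
}
```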
	I0626 20:46:33.094277   47309 start.go:83] releasing machines lock for "no-preload-934450", held for 18.776943332s
	I0626 20:46:33.094309   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.094577   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:33.097365   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097744   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.097775   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.097979   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098382   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098586   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:46:33.098661   47309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:33.098712   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.098797   47309 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:33.098816   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:46:33.101252   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101554   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101580   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101599   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.101719   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.101873   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.101951   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:33.101981   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:33.102007   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102160   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.102182   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:46:33.102316   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:46:33.102443   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:46:33.102551   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:46:33.210044   47309 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:33.215912   47309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:33.359955   47309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:33.366146   47309 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:33.366217   47309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:33.380504   47309 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:33.380526   47309 start.go:466] detecting cgroup driver to use...
	I0626 20:46:33.380579   47309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:33.393306   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:33.404983   47309 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:33.405038   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:33.418216   47309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:33.432337   47309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:33.531250   47309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:33.645556   47309 docker.go:212] disabling docker service ...
	I0626 20:46:33.645633   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:33.659515   47309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:33.671856   47309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:33.774921   47309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:33.883215   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
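The docker/cri-docker teardown above is a stop, disable, mask sequence followed by an `is-active` probe, with each step tolerated if the unit is absent. A generalized sketch of that ladder (the unit list and error handling are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// disableUnit stops, disables and masks a systemd unit, tolerating
// failures at each step, in the spirit of the stop/disable/mask sequence
// the log runs for cri-docker.* and docker.*.
func disableUnit(unit string) {
	for _, verb := range [][]string{
		{"stop", "-f", unit},
		{"disable", unit},
		{"mask", unit},
	} {
		args := append([]string{"systemctl"}, verb...)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			// Non-fatal: the unit may not exist on this image.
			fmt.Printf("systemctl %v: %v: %s\n", verb, err, out)
		}
	}
}

func main() {
	for _, u := range []string{"cri-docker.socket", "docker.socket", "docker.service"} {
		disableUnit(u)
	}
}
```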
	I0626 20:46:33.898847   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:33.917506   47309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:33.917580   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.928683   47309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:33.928743   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.939242   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:33.949833   47309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
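The crio.go lines above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin `pause_image`, force `cgroup_manager = "cgroupfs"`, and re-pin `conmon_cgroup` to `"pod"` (delete the old line, then append after `cgroup_manager`). A sketch of how those one-liners can be generated for a key/value pair (the helper name is illustrative):

```go
package main

import "fmt"

// setCrioOption builds the sed one-liner the log uses to pin a top-level
// key in /etc/crio/crio.conf.d/02-crio.conf to a quoted value.
func setCrioOption(key, value string) string {
	return fmt.Sprintf(
		`sudo sed -i 's|^.*%[1]s = .*$|%[1]s = "%[2]s"|' /etc/crio/crio.conf.d/02-crio.conf`,
		key, value)
}

func main() {
	fmt.Println(setCrioOption("pause_image", "registry.k8s.io/pause:3.9"))
	fmt.Println(setCrioOption("cgroup_manager", "cgroupfs"))
}
```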
	I0626 20:46:33.960544   47309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:33.970988   47309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:33.979977   47309 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:33.980018   47309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:33.992692   47309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
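The status-255 sysctl error above is expected on a fresh VM: /proc/sys/net/bridge/* does not exist until `br_netfilter` is loaded, so the failure is logged as "might be okay", the module is loaded with modprobe, and IPv4 forwarding is enabled directly. A sketch of that probe-then-fix chain (the command wiring is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", name, err, out)
	}
	return nil
}

func main() {
	// Probe: fails with status 255 until br_netfilter is loaded, which the
	// log treats as non-fatal.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("couldn't verify netfilter, which might be okay:", err)
		// Fallback: load the module so the sysctl becomes available.
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe:", err)
		}
	}
	// Independently make sure IPv4 forwarding is on for pod traffic.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
```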
	I0626 20:46:34.001898   47309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:34.099514   47309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:46:34.265988   47309 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:34.266060   47309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:34.273678   47309 start.go:534] Will wait 60s for crictl version
	I0626 20:46:34.273739   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.277401   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:34.312548   47309 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:34.312630   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.360715   47309 ssh_runner.go:195] Run: crio --version
	I0626 20:46:34.413882   47309 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:34.415181   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetIP
	I0626 20:46:34.417841   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418166   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:46:34.418189   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:46:34.418410   47309 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:34.422651   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:34.434668   47309 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:34.434717   47309 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:34.465589   47309 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:34.465614   47309 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:46:34.465690   47309 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.465708   47309 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.465738   47309 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.465754   47309 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.465788   47309 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.465828   47309 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.465693   47309 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.465936   47309 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.467120   47309 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0626 20:46:34.467039   47309 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.467219   47309 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.467247   47309 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:34.467295   47309 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.467306   47309 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.467250   47309 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.636874   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.655059   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.683826   47309 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0626 20:46:34.683861   47309 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.683928   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.702952   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0626 20:46:34.703028   47309 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0626 20:46:34.703071   47309 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.703103   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.741790   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0626 20:46:34.741897   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0626 20:46:34.742006   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.746779   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.749151   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0626 20:46:34.759216   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.760925   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.763727   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.802768   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0626 20:46:34.802855   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0626 20:46:34.802879   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802936   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0626 20:46:34.802879   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:34.875629   47309 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0626 20:46:34.875683   47309 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:34.875741   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976009   47309 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0626 20:46:34.976048   47309 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:34.976082   47309 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0626 20:46:34.976100   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976116   47309 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:34.976117   47309 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0626 20:46:34.976143   47309 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:34.976156   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:34.976179   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:35.433285   47309 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
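The cache_images flow above inspects each required image by ID in the runtime, compares it to the expected hash, and on mismatch removes the stale copy with `crictl rmi` before loading the tarball from the local cache. A sketch of the needs-transfer decision (simplified; names are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks podman for the stored ID of an image, as in the
// `sudo podman image inspect --format {{.Id}}` lines above; an error
// typically means the image is simply absent.
func imageID(img string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", img).Output()
	return strings.TrimSpace(string(out)), err
}

// needsTransfer reports whether img must be reloaded from the on-disk
// cache: either it is missing or its ID differs from the expected hash.
func needsTransfer(img, wantID string) bool {
	got, err := imageID(img)
	return err != nil || got != wantID
}

func main() {
	img := "registry.k8s.io/kube-proxy:v1.27.3"
	want := "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c"
	if needsTransfer(img, want) {
		fmt.Printf("%q needs transfer: load it from the local cache\n", img)
	}
}
```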
	I0626 20:46:34.379704   47605 main.go:141] libmachine: (embed-certs-299839) Waiting to get IP...
	I0626 20:46:34.380770   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.381274   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.381362   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.381264   48187 retry.go:31] will retry after 291.849421ms: waiting for machine to come up
	I0626 20:46:34.674760   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.675247   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.675276   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.675192   48187 retry.go:31] will retry after 276.057593ms: waiting for machine to come up
	I0626 20:46:34.952573   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:34.953045   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:34.953077   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:34.953003   48187 retry.go:31] will retry after 360.478931ms: waiting for machine to come up
	I0626 20:46:35.315537   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.316036   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.316057   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.315988   48187 retry.go:31] will retry after 582.62072ms: waiting for machine to come up
	I0626 20:46:35.899816   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:35.900171   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:35.900232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:35.900154   48187 retry.go:31] will retry after 502.843212ms: waiting for machine to come up
	I0626 20:46:36.404792   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:36.405188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:36.405222   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:36.405134   48187 retry.go:31] will retry after 594.811848ms: waiting for machine to come up
	I0626 20:46:37.001827   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:37.002238   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:37.002264   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:37.002182   48187 retry.go:31] will retry after 1.067889284s: waiting for machine to come up
	I0626 20:46:38.071685   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:38.072135   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:38.072158   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:38.072094   48187 retry.go:31] will retry after 1.189834776s: waiting for machine to come up
	I0626 20:46:36.844137   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (2.041169028s)
	I0626 20:46:36.844171   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0626 20:46:36.844205   47309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.27.3: (2.041210189s)
	I0626 20:46:36.844232   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0626 20:46:36.844245   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844257   47309 ssh_runner.go:235] Completed: which crictl: (1.868146562s)
	I0626 20:46:36.844293   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0626 20:46:36.844300   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0626 20:46:36.844234   47309 ssh_runner.go:235] Completed: which crictl: (1.968483663s)
	I0626 20:46:36.844349   47309 ssh_runner.go:235] Completed: which crictl: (1.868154335s)
	I0626 20:46:36.844364   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0626 20:46:36.844380   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0626 20:46:36.844405   47309 ssh_runner.go:235] Completed: which crictl: (1.868235538s)
	I0626 20:46:36.844428   47309 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.411115015s)
	I0626 20:46:36.844448   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0626 20:46:36.844455   47309 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0626 20:46:36.844488   47309 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:36.844513   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:46:39.895683   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (3.051359255s)
	I0626 20:46:39.895720   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0626 20:46:39.895808   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0: (3.051484848s)
	I0626 20:46:39.895824   47309 ssh_runner.go:235] Completed: which crictl: (3.051289954s)
	I0626 20:46:39.895855   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0626 20:46:39.895873   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1: (3.051494383s)
	I0626 20:46:39.895888   47309 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:46:39.895908   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0626 20:46:39.895950   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:39.895909   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3: (3.051516174s)
	I0626 20:46:39.895990   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:39.896000   47309 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3: (3.051535924s)
	I0626 20:46:39.896033   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0626 20:46:39.896034   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0626 20:46:39.896089   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.896102   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901778   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0626 20:46:39.901797   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.901830   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0626 20:46:39.911439   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0626 20:46:39.911477   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0626 20:46:39.911517   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0626 20:46:39.943818   47309 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0626 20:46:39.943947   47309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:41.278134   47309 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.334156546s)
	I0626 20:46:41.278173   47309 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0626 20:46:41.278135   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (1.376281957s)
	I0626 20:46:41.278187   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0626 20:46:41.278207   47309 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:41.278256   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0626 20:46:39.263991   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:39.264402   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:39.264433   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:39.264371   48187 retry.go:31] will retry after 1.805262511s: waiting for machine to come up
	I0626 20:46:41.071232   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:41.071707   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:41.071731   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:41.071662   48187 retry.go:31] will retry after 1.945519102s: waiting for machine to come up
	I0626 20:46:43.018581   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:43.019039   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:43.019075   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:43.018983   48187 retry.go:31] will retry after 2.83662877s: waiting for machine to come up
	I0626 20:46:43.745408   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.467115523s)
	I0626 20:46:43.745443   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0626 20:46:43.745479   47309 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:43.745551   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0626 20:46:45.011214   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.26563338s)
	I0626 20:46:45.011266   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0626 20:46:45.011296   47309 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:45.011349   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0626 20:46:45.858520   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:45.858992   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:45.859026   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:45.858941   48187 retry.go:31] will retry after 2.332305212s: waiting for machine to come up
	I0626 20:46:48.193085   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:48.193594   47605 main.go:141] libmachine: (embed-certs-299839) DBG | unable to find current IP address of domain embed-certs-299839 in network mk-embed-certs-299839
	I0626 20:46:48.193625   47605 main.go:141] libmachine: (embed-certs-299839) DBG | I0626 20:46:48.193543   48187 retry.go:31] will retry after 2.846333425s: waiting for machine to come up
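The `retry.go:31` lines above show libmachine polling for the VM's DHCP lease with a growing, jittered delay between attempts. Below is a minimal sketch of that wait-with-backoff pattern — not minikube's actual `retry` package; the base delay, growth factor, and deadline are illustrative assumptions:

```go
// Sketch of the "will retry after ...: waiting for machine to come up"
// pattern: poll a condition with a jittered, growing delay until a deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() until it succeeds or the deadline passes,
// sleeping a randomized, growing interval between attempts.
func waitFor(check func() error, deadline time.Duration) error {
	start := time.Now()
	base := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if err := check(); err == nil {
			return nil
		}
		// Jitter the delay, then grow it — the varying
		// "will retry after 1.8s / 1.9s / 2.8s" waits above suggest this.
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
		base *= 2
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	_ = waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet") // stands in for the DHCP lease lookup
		}
		return nil
	}, 30*time.Second)
}
```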
	I0626 20:46:52.634333   47779 start.go:369] acquired machines lock for "default-k8s-diff-port-473235" in 2m17.310683576s
	I0626 20:46:52.634385   47779 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:46:52.634413   47779 fix.go:54] fixHost starting: 
	I0626 20:46:52.634850   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:46:52.634890   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:46:52.654153   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0626 20:46:52.654638   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:46:52.655306   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:46:52.655337   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:46:52.655747   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:46:52.655952   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:46:52.656158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:46:52.657823   47779 fix.go:102] recreateIfNeeded on default-k8s-diff-port-473235: state=Stopped err=<nil>
	I0626 20:46:52.657850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	W0626 20:46:52.657997   47779 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:46:52.659722   47779 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-473235" ...
	I0626 20:46:51.043526   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044005   47605 main.go:141] libmachine: (embed-certs-299839) Found IP for machine: 192.168.39.51
	I0626 20:46:51.044034   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has current primary IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.044045   47605 main.go:141] libmachine: (embed-certs-299839) Reserving static IP address...
	I0626 20:46:51.044351   47605 main.go:141] libmachine: (embed-certs-299839) Reserved static IP address: 192.168.39.51
	I0626 20:46:51.044368   47605 main.go:141] libmachine: (embed-certs-299839) Waiting for SSH to be available...
	I0626 20:46:51.044405   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.044439   47605 main.go:141] libmachine: (embed-certs-299839) DBG | skip adding static IP to network mk-embed-certs-299839 - found existing host DHCP lease matching {name: "embed-certs-299839", mac: "52:54:00:d6:e6:45", ip: "192.168.39.51"}
	I0626 20:46:51.044456   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Getting to WaitForSSH function...
	I0626 20:46:51.046694   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047088   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.047121   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.047312   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH client type: external
	I0626 20:46:51.047348   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa (-rw-------)
	I0626 20:46:51.047392   47605 main.go:141] libmachine: (embed-certs-299839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:46:51.047414   47605 main.go:141] libmachine: (embed-certs-299839) DBG | About to run SSH command:
	I0626 20:46:51.047432   47605 main.go:141] libmachine: (embed-certs-299839) DBG | exit 0
	I0626 20:46:51.137058   47605 main.go:141] libmachine: (embed-certs-299839) DBG | SSH cmd err, output: <nil>: 
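The `WaitForSSH` lines above exec the system `ssh` binary with the options shown and treat a successful `exit 0` as proof the guest is reachable. An illustrative sketch of that probe (host and key path are placeholders taken from the log; this is not minikube's API):

```go
// Probe SSH reachability the way the log does: run `ssh ... <host> exit 0`
// with the machine's private key and treat exit status 0 as "available".
package main

import (
	"fmt"
	"os/exec"
)

func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil // nil error => the remote command exited 0
}

func main() {
	fmt.Println(sshReady("192.168.39.51", "/path/to/id_rsa"))
}
```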
	I0626 20:46:51.137408   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetConfigRaw
	I0626 20:46:51.197444   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.199920   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200306   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.200339   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.200574   47605 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/config.json ...
	I0626 20:46:51.267260   47605 machine.go:88] provisioning docker machine ...
	I0626 20:46:51.267304   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:51.267709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.267921   47605 buildroot.go:166] provisioning hostname "embed-certs-299839"
	I0626 20:46:51.267943   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.268086   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.270429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270762   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.270790   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.270886   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.271060   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271200   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.271308   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.271475   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.271933   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.271950   47605 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-299839 && echo "embed-certs-299839" | sudo tee /etc/hostname
	I0626 20:46:51.403584   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-299839
	
	I0626 20:46:51.403622   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.406552   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.406876   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.406904   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.407053   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.407335   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407530   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.407716   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.407883   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:51.408280   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:51.408300   47605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-299839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-299839/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-299839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:46:51.534666   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:46:51.534702   47605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:46:51.534745   47605 buildroot.go:174] setting up certificates
	I0626 20:46:51.534753   47605 provision.go:83] configureAuth start
	I0626 20:46:51.534766   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetMachineName
	I0626 20:46:51.535047   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:51.537753   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538113   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.538141   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.538253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.540471   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.540890   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.540922   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.541015   47605 provision.go:138] copyHostCerts
	I0626 20:46:51.541089   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:46:51.541099   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:46:51.541155   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:46:51.541237   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:46:51.541246   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:46:51.541277   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:46:51.541333   47605 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:46:51.541339   47605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:46:51.541357   47605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:46:51.541434   47605 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.embed-certs-299839 san=[192.168.39.51 192.168.39.51 localhost 127.0.0.1 minikube embed-certs-299839]
	I0626 20:46:51.873317   47605 provision.go:172] copyRemoteCerts
	I0626 20:46:51.873396   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:46:51.873427   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:51.876293   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876659   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:51.876696   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:51.876889   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:51.877100   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:51.877262   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:51.877430   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:51.970189   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:46:51.993067   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:46:52.015607   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0626 20:46:52.037359   47605 provision.go:86] duration metric: configureAuth took 502.581033ms
	I0626 20:46:52.037401   47605 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:46:52.037623   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:46:52.037714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.040949   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041429   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.041486   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.041642   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.041859   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042061   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.042235   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.042398   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.042916   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.042936   47605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:46:52.366045   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:46:52.366072   47605 machine.go:91] provisioned docker machine in 1.098783864s
	I0626 20:46:52.366083   47605 start.go:300] post-start starting for "embed-certs-299839" (driver="kvm2")
	I0626 20:46:52.366112   47605 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:46:52.366134   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.366443   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:46:52.366472   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.369138   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369570   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.369630   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.369781   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.369957   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.370131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.370278   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.467055   47605 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:46:52.471203   47605 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:46:52.471222   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:46:52.471288   47605 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:46:52.471394   47605 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:46:52.471510   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:46:52.484668   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:52.510268   47605 start.go:303] post-start completed in 144.162745ms
	I0626 20:46:52.510292   47605 fix.go:56] fixHost completed within 19.415851972s
	I0626 20:46:52.510315   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.513188   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513629   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.513662   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.513848   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.514062   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514228   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.514415   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.514569   47605 main.go:141] libmachine: Using SSH client type: native
	I0626 20:46:52.514968   47605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0626 20:46:52.514983   47605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:46:52.634177   47605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812412.582368193
	
	I0626 20:46:52.634199   47605 fix.go:206] guest clock: 1687812412.582368193
	I0626 20:46:52.634209   47605 fix.go:219] Guest: 2023-06-26 20:46:52.582368193 +0000 UTC Remote: 2023-06-26 20:46:52.510296584 +0000 UTC m=+163.430129249 (delta=72.071609ms)
	I0626 20:46:52.634237   47605 fix.go:190] guest clock delta is within tolerance: 72.071609ms
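The clock check just above runs `date +%s.%N` in the guest and compares the result to the host clock (the log records a delta of ~72ms). A small sketch of that comparison, assuming a one-second tolerance for illustration:

```go
// Parse the guest's `date +%s.%N` output and compare it to the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// %N prints nine digits, so this is already nanoseconds.
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return guest.Sub(host), nil
}

func main() {
	d, _ := guestClockDelta("1687812412.582368193", time.Now())
	within := d < time.Second && d > -time.Second
	fmt.Printf("delta=%v within tolerance=%v\n", d, within)
}
```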
	I0626 20:46:52.634242   47605 start.go:83] releasing machines lock for "embed-certs-299839", held for 19.539848437s
	I0626 20:46:52.634277   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.634623   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:52.637732   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638182   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.638220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.638476   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639040   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639223   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:46:52.639307   47605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:46:52.639346   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.639490   47605 ssh_runner.go:195] Run: cat /version.json
	I0626 20:46:52.639517   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:46:52.642288   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642923   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.642968   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643016   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643351   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643492   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:52.643528   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:52.643564   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643763   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.643778   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:46:52.643973   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:46:52.643991   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.644109   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:46:52.644240   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:46:52.761230   47605 ssh_runner.go:195] Run: systemctl --version
	I0626 20:46:52.766865   47605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:46:52.919883   47605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:46:52.927218   47605 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:46:52.927290   47605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:46:52.948916   47605 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:46:52.948983   47605 start.go:466] detecting cgroup driver to use...
	I0626 20:46:52.949043   47605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:46:52.968673   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:46:52.982360   47605 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:46:52.982416   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:46:52.996984   47605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:46:53.015021   47605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:46:53.116692   47605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:46:53.251017   47605 docker.go:212] disabling docker service ...
	I0626 20:46:53.251096   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:46:53.268097   47605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:46:53.282223   47605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:46:53.412477   47605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:46:53.528110   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:46:53.541392   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:46:53.558736   47605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:46:53.558809   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.568482   47605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:46:53.568553   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.578178   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.587728   47605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:46:53.597231   47605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:46:53.606954   47605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:46:53.615250   47605 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:46:53.615308   47605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:46:53.628161   47605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:46:53.636477   47605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:46:53.755919   47605 ssh_runner.go:195] Run: sudo systemctl restart crio
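The `sed -i` calls above rewrite the `pause_image` and `cgroup_manager` keys in `/etc/crio/crio.conf.d/02-crio.conf` before restarting CRI-O. A sketch of the same whole-line replacement done in Go (the regexes mirror the sed expressions; the sample input is made up):

```go
// Rewrite CRI-O drop-in config keys the way the sed commands in the log do.
package main

import (
	"fmt"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

func rewriteCrioConf(conf string) string {
	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"k8s.gcr.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}
```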
	I0626 20:46:53.928744   47605 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:46:53.928823   47605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:46:53.934088   47605 start.go:534] Will wait 60s for crictl version
	I0626 20:46:53.934152   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:46:53.939345   47605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:46:53.971679   47605 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:46:53.971781   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.013494   47605 ssh_runner.go:195] Run: crio --version
	I0626 20:46:54.062724   47605 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0626 20:46:54.064536   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetIP
	I0626 20:46:54.067854   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068220   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:46:54.068254   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:46:54.068535   47605 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0626 20:46:54.072971   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:54.085981   47605 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:46:54.086048   47605 ssh_runner.go:195] Run: sudo crictl images --output json
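`sudo crictl images --output json` is then used to decide whether a preload is needed. A hedged sketch of consuming that output, assuming the usual `{"images":[{"id":...,"repoTags":[...]}]}` shape crictl emits:

```go
// Decode `crictl images --output json` to see which images are already present.
package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// In practice this would be the command's stdout.
	raw := []byte(`{"images":[{"id":"sha256:abc","repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}
	for _, img := range out.Images {
		fmt.Println(img.RepoTags)
	}
}
```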
	I0626 20:46:52.661170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Start
	I0626 20:46:52.661331   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring networks are active...
	I0626 20:46:52.662042   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network default is active
	I0626 20:46:52.662444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Ensuring network mk-default-k8s-diff-port-473235 is active
	I0626 20:46:52.663218   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Getting domain xml...
	I0626 20:46:52.663876   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Creating domain...
	I0626 20:46:53.987148   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting to get IP...
	I0626 20:46:53.988282   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988739   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:53.988832   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:53.988735   48355 retry.go:31] will retry after 271.192351ms: waiting for machine to come up
	I0626 20:46:54.261343   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261825   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.261857   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.261773   48355 retry.go:31] will retry after 362.262293ms: waiting for machine to come up
	I0626 20:46:54.625453   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625951   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.625978   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.625859   48355 retry.go:31] will retry after 311.337455ms: waiting for machine to come up
	I0626 20:46:54.938519   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939023   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:54.939053   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:54.938972   48355 retry.go:31] will retry after 446.154442ms: waiting for machine to come up
	I0626 20:46:52.039929   47309 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.0285527s)
	I0626 20:46:52.039951   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0626 20:46:52.039974   47309 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.040015   47309 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0626 20:46:52.786422   47309 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0626 20:46:52.786468   47309 cache_images.go:123] Successfully loaded all cached images
	I0626 20:46:52.786474   47309 cache_images.go:92] LoadImages completed in 18.320847233s
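The `LoadImages` sequence that just completed follows one pattern per image: `stat` the tarball under `/var/lib/minikube/images`, skip the transfer when it already exists, then run `sudo podman load -i` on it. A simplified local sketch of that flow (minikube performs these steps over SSH via `ssh_runner`):

```go
// Load a cached image tarball into podman, skipping the copy if it exists.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err == nil {
		fmt.Printf("copy: skipping %s (exists)\n", tarball)
	} else {
		return fmt.Errorf("tarball missing, would transfer from cache: %w", err)
	}
	// Equivalent of: sudo podman load -i <tarball>
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = loadCachedImage("/var/lib/minikube/images/etcd_3.5.7-0")
}
```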
	I0626 20:46:52.786562   47309 ssh_runner.go:195] Run: crio config
	I0626 20:46:52.857805   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:46:52.857833   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:52.857849   47309 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:52.857871   47309 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.38 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-934450 NodeName:no-preload-934450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:52.858035   47309 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-934450"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:46:52.858115   47309 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-934450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 20:46:52.858172   47309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:52.867179   47309 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:52.867253   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:52.875412   47309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0626 20:46:52.891376   47309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:52.906859   47309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0626 20:46:52.924927   47309 ssh_runner.go:195] Run: grep 192.168.50.38	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:52.929059   47309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:52.942789   47309 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450 for IP: 192.168.50.38
	I0626 20:46:52.942825   47309 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:52.943011   47309 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:52.943059   47309 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:52.943138   47309 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.key
	I0626 20:46:52.943195   47309 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key.01da567d
	I0626 20:46:52.943236   47309 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key
	I0626 20:46:52.943341   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:52.943376   47309 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:52.943396   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:52.943435   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:52.943472   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:52.943509   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:52.943551   47309 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:52.944147   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:52.971630   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:52.997892   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:53.024951   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 20:46:53.048462   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:53.075077   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:53.100318   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:53.129545   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:53.162187   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:53.191304   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:53.216166   47309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:53.240182   47309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:53.256447   47309 ssh_runner.go:195] Run: openssl version
	I0626 20:46:53.262053   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:53.272163   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277028   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.277084   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:53.282611   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:53.296039   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:53.306923   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312778   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.312825   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:53.320244   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:53.334066   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:53.347662   47309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353665   47309 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.353725   47309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:53.361150   47309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:46:53.374846   47309 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:53.380462   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:53.387949   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:53.393690   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:53.399208   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:53.405073   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:53.411265   47309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
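Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate remains valid for at least another day. The same check expressed in Go, as a sketch (the path in `main` is one of the files checked above):

```go
// Verify a PEM certificate is still valid for at least the given window,
// mirroring `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func checkEnd(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("certificate expires within %v (NotAfter=%v)", window, cert.NotAfter)
	}
	return nil
}

func main() {
	err := checkEnd("/var/lib/minikube/certs/apiserver-etcd-client.crt", 86400*time.Second)
	fmt.Println(err)
}
```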
	I0626 20:46:53.417798   47309 kubeadm.go:404] StartCluster: {Name:no-preload-934450 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-934450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:53.417916   47309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:53.417950   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:53.451231   47309 cri.go:89] found id: ""
	I0626 20:46:53.451307   47309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:53.460716   47309 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:53.460737   47309 kubeadm.go:636] restartCluster start
	I0626 20:46:53.460790   47309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:53.470518   47309 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.471961   47309 kubeconfig.go:92] found "no-preload-934450" server: "https://192.168.50.38:8443"
	I0626 20:46:53.475433   47309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:53.484054   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.484108   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:53.497348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:53.998070   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:53.998129   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.010119   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.498134   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.498223   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:54.512223   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.997432   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:54.997520   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.015317   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.497435   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.497516   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:55.512591   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:55.998180   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:55.998251   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.013135   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:56.497483   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.497573   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:56.512714   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:54.116295   47605 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:46:54.116360   47605 ssh_runner.go:195] Run: which lz4
	I0626 20:46:54.120344   47605 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:46:54.124462   47605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:46:54.124490   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:46:55.959041   47605 crio.go:444] Took 1.838722 seconds to copy over tarball
	I0626 20:46:55.959115   47605 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:46:59.019532   47605 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060382374s)
	I0626 20:46:59.019555   47605 crio.go:451] Took 3.060486 seconds to extract the tarball
	I0626 20:46:59.019562   47605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:46:59.058687   47605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:46:59.102812   47605 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:46:59.102833   47605 cache_images.go:84] Images are preloaded, skipping loading
	I0626 20:46:59.102896   47605 ssh_runner.go:195] Run: crio config
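The preload path above runs in three steps: an existence check for /preloaded.tar.lz4, an scp of the cached tarball, and a tar -I lz4 extraction into /var, after which crictl images --output json confirms the expected images are present. A small sketch of that final verification, assuming crictl's JSON shape (a top-level "images" array with per-image "repoTags"):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the assumed shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the CRI runtime already holds the tag,
// i.e. the preload extraction made a pull unnecessary.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.27.3")
	fmt.Println(ok, err)
}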
	I0626 20:46:55.386479   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.386986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:55.387014   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:55.386901   48355 retry.go:31] will retry after 710.798834ms: waiting for machine to come up
	I0626 20:46:56.099580   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100079   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:56.100112   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:56.100023   48355 retry.go:31] will retry after 921.187154ms: waiting for machine to come up
	I0626 20:46:57.022481   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022914   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.022944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.022859   48355 retry.go:31] will retry after 914.232442ms: waiting for machine to come up
	I0626 20:46:57.938375   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938823   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:57.938845   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:57.938807   48355 retry.go:31] will retry after 1.411011331s: waiting for machine to come up
	I0626 20:46:59.351697   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352133   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:46:59.352169   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:46:59.352076   48355 retry.go:31] will retry after 1.830031795s: waiting for machine to come up
	I0626 20:46:56.997450   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:56.997518   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.009310   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.497847   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.497929   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:57.513061   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:57.997474   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:57.997553   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.012610   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.498200   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.498274   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:58.513410   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:58.997938   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:58.998022   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.013357   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.497503   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.497581   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.514354   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.997445   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.997531   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.008894   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.497471   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.497555   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.508635   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.998326   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.998429   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.009836   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.498479   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.498593   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.510348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
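The api_server.go:166/170 pairs repeating above are a fixed-cadence poll: roughly every half second the runner re-executes sudo pgrep -xnf kube-apiserver.*minikube.* until a matching process exists or an overall deadline lapses. A minimal sketch of such a wait loop (the 500ms interval and 2-minute deadline are assumptions for illustration):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until kube-apiserver shows up or the
// context deadline expires, mirroring the retry cadence in the log.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pgrep exits 0 once a match exists
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}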
	I0626 20:46:59.159206   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:46:59.159236   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:46:59.159252   47605 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:46:59.159286   47605 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.51 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-299839 NodeName:embed-certs-299839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:46:59.159423   47605 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-299839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:46:59.159484   47605 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-299839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
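The [Service] drop-in above is rendered from the cluster config printed beneath it. A rough illustration of producing such a unit with text/template; the template text and the kubeletFlags struct here are invented for the example and are not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletFlags holds just the fields the drop-in below interpolates;
// the real config struct is much larger.
type kubeletFlags struct {
	Version, Hostname, NodeIP string
}

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the embed-certs-299839 run above.
	_ = t.Execute(os.Stdout, kubeletFlags{
		Version:  "v1.27.3",
		Hostname: "embed-certs-299839",
		NodeIP:   "192.168.39.51",
	})
}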
	I0626 20:46:59.159540   47605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:46:59.168802   47605 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:46:59.168882   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:46:59.177994   47605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0626 20:46:59.196041   47605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:46:59.214092   47605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0626 20:46:59.235187   47605 ssh_runner.go:195] Run: grep 192.168.39.51	control-plane.minikube.internal$ /etc/hosts
	I0626 20:46:59.239440   47605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:46:59.251723   47605 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839 for IP: 192.168.39.51
	I0626 20:46:59.251772   47605 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:46:59.251943   47605 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:46:59.252017   47605 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:46:59.252134   47605 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/client.key
	I0626 20:46:59.252381   47605 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key.be9c3c95
	I0626 20:46:59.252482   47605 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key
	I0626 20:46:59.252626   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:46:59.252667   47605 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:46:59.252682   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:46:59.252718   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:46:59.252748   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:46:59.252805   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:46:59.252868   47605 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:46:59.253555   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:46:59.280222   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:46:59.306244   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:46:59.331876   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/embed-certs-299839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:46:59.358710   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:46:59.385239   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:46:59.408963   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:46:59.433684   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:46:59.457235   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:46:59.480565   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:46:59.507918   47605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:46:59.532762   47605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:46:59.551283   47605 ssh_runner.go:195] Run: openssl version
	I0626 20:46:59.557079   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:46:59.568335   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573129   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.573187   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:46:59.579116   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:46:59.589952   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:46:59.600935   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605668   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.605735   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:46:59.611234   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:46:59.622615   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:46:59.633737   47605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638884   47605 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.638962   47605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:46:59.644559   47605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:46:59.655653   47605 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:46:59.660632   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:46:59.666672   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:46:59.672628   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:46:59.679194   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:46:59.685197   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:46:59.691190   47605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0626 20:46:59.697063   47605 kubeadm.go:404] StartCluster: {Name:embed-certs-299839 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-299839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:46:59.697146   47605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:46:59.697191   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:46:59.731197   47605 cri.go:89] found id: ""
	I0626 20:46:59.731256   47605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:46:59.741949   47605 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:46:59.741968   47605 kubeadm.go:636] restartCluster start
	I0626 20:46:59.742023   47605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:46:59.751837   47605 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:46:59.753347   47605 kubeconfig.go:92] found "embed-certs-299839" server: "https://192.168.39.51:8443"
	I0626 20:46:59.756955   47605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:46:59.766951   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:46:59.767023   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:46:59.779343   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.280064   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.280149   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.293730   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:00.780264   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:00.780347   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:00.793352   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.279827   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.279911   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.292843   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.779409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.779513   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:01.793293   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.279814   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.279902   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.296345   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.779892   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.779980   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.796346   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.280342   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.280417   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.292883   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.780156   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:03.780232   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.792667   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:01.184295   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184668   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:01.184694   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:01.184605   48355 retry.go:31] will retry after 2.248796967s: waiting for machine to come up
	I0626 20:47:03.435559   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436054   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:03.436086   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:03.435982   48355 retry.go:31] will retry after 2.012102985s: waiting for machine to come up
	I0626 20:47:01.998275   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:01.998353   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.014217   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.497731   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.497824   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:02.509505   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:02.998119   47309 api_server.go:166] Checking apiserver status ...
	I0626 20:47:02.998202   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:03.009348   47309 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:03.485111   47309 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:03.485154   47309 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:03.485167   47309 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:03.485216   47309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:03.516791   47309 cri.go:89] found id: ""
	I0626 20:47:03.516868   47309 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:03.531523   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:03.540694   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:03.540761   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549498   47309 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:03.549525   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:03.687202   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.779117   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.091878038s)
	I0626 20:47:04.779156   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:04.983470   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:05.059963   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:05.136199   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:05.136282   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:05.663265   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:06.163057   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
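The restart path above reruns kubeadm phase by phase instead of a full init: certs, kubeconfig, kubelet-start, control-plane, then etcd, all against the same /var/tmp/minikube/kubeadm.yaml. A compact sketch of driving that sequence from Go (error handling simplified; paths and version as in the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase order as observed in the kubeadm.go log lines above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
			return
		}
	}
}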
	I0626 20:47:04.280330   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.280447   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.292565   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:04.780127   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:04.780225   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:04.797554   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.279900   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.279986   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.297853   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.779501   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:05.779594   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:05.794314   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.279916   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.280001   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.296829   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:06.779473   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:06.779566   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:06.793302   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.279802   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.279888   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.292407   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:07.779813   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:07.779914   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:07.793591   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.279846   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.279935   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.292196   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:08.779753   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:08.779859   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:08.792362   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:05.450681   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451186   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:05.451216   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:05.451117   48355 retry.go:31] will retry after 3.442192384s: waiting for machine to come up
	I0626 20:47:08.895024   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | unable to find current IP address of domain default-k8s-diff-port-473235 in network mk-default-k8s-diff-port-473235
	I0626 20:47:08.895595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | I0626 20:47:08.895520   48355 retry.go:31] will retry after 4.272351839s: waiting for machine to come up
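The retry.go:31 lines trace a jittered, roughly growing backoff while the VM waits for a DHCP lease: each failed IP lookup schedules the next attempt after a randomized delay. A self-contained sketch of the pattern; getIP is a stand-in for the libvirt lease lookup, and the growth factor is an assumption for illustration:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// getIP is a stand-in for querying the libvirt DHCP leases; it fails
// until the machine has come up.
func getIP() (string, error) { return "", errors.New("no lease yet") }

func main() {
	base := 500 * time.Millisecond
	for attempt := 0; attempt < 10; attempt++ {
		if ip, err := getIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the base delay and add jitter, like the log's
		// "will retry after 710.798834ms / 1.411011331s / ..." lines.
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", d)
		time.Sleep(d)
		base = base * 3 / 2
	}
	fmt.Println("gave up waiting for machine to come up")
}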
	I0626 20:47:06.662926   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.163275   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.662871   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:07.689321   47309 api_server.go:72] duration metric: took 2.55312002s to wait for apiserver process to appear ...
	I0626 20:47:07.689348   47309 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:07.689366   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:10.879412   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:10.879439   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:11.379823   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.386705   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.386736   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:11.880574   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:11.892733   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:11.892768   47309 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:12.380392   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:47:12.389894   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:47:12.400274   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:12.400307   47309 api_server.go:131] duration metric: took 4.710951407s to wait for apiserver health ...
	I0626 20:47:12.400320   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:47:12.400332   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:12.402355   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
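The healthz exchange above is the readiness gate: a 403 means the endpoint answered but anonymous access is still forbidden, a 500 enumerates the poststarthooks that have not finished, and only a plain 200 "ok" ends the wait. A minimal poller with the same success criterion (TLS verification is skipped here for brevity; a real client would trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200, treating 403 and 500
// (pending poststarthooks) as "not ready yet".
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never returned 200 within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.50.38:8443/healthz", 4*time.Minute))
}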
	I0626 20:47:09.280409   47605 api_server.go:166] Checking apiserver status ...
	I0626 20:47:09.280512   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:09.293009   47605 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:09.767593   47605 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:09.767636   47605 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:09.767648   47605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:09.767705   47605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:09.800380   47605 cri.go:89] found id: ""
	I0626 20:47:09.800465   47605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:09.819239   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:09.830482   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:09.830547   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840424   47605 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:09.840451   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:09.979898   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.746785   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:10.960847   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:11.041569   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:11.122238   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:11.122322   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:11.640034   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.140386   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:12.640370   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.139901   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.639546   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:13.663848   47605 api_server.go:72] duration metric: took 2.54160148s to wait for apiserver process to appear ...
	I0626 20:47:13.663874   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:13.663905   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
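
The run of pgrep calls above is minikube's process wait: it re-executes sudo pgrep -xnf kube-apiserver.*minikube.* on the guest roughly every 500ms until the binary appears (pgrep exits 1 while nothing matches, which is the earlier "Process exited with status 1"). A minimal standalone sketch of the same wait, run on the node itself rather than through minikube's ssh_runner:

	# poll for the apiserver process at the same ~500ms cadence as the log
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done
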
	I0626 20:47:14.587552   46683 start.go:369] acquired machines lock for "old-k8s-version-490377" in 55.268521785s
	I0626 20:47:14.587610   46683 start.go:96] Skipping create...Using existing machine configuration
	I0626 20:47:14.587622   46683 fix.go:54] fixHost starting: 
	I0626 20:47:14.588035   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:47:14.588074   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:47:14.607186   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0626 20:47:14.607765   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:47:14.608361   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:47:14.608384   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:47:14.608697   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:47:14.608908   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:14.609056   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:47:14.610765   46683 fix.go:102] recreateIfNeeded on old-k8s-version-490377: state=Stopped err=<nil>
	I0626 20:47:14.610791   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	W0626 20:47:14.611905   46683 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 20:47:14.613885   46683 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-490377" ...
	I0626 20:47:13.169996   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.170568   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Found IP for machine: 192.168.61.238
	I0626 20:47:13.170601   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserving static IP address...
	I0626 20:47:13.170622   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has current primary IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.171048   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.171080   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Reserved static IP address: 192.168.61.238
	I0626 20:47:13.171107   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | skip adding static IP to network mk-default-k8s-diff-port-473235 - found existing host DHCP lease matching {name: "default-k8s-diff-port-473235", mac: "52:54:00:89:62:a8", ip: "192.168.61.238"}
	I0626 20:47:13.171128   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Getting to WaitForSSH function...
	I0626 20:47:13.171141   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Waiting for SSH to be available...
	I0626 20:47:13.173755   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174235   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.174265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.174442   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH client type: external
	I0626 20:47:13.174485   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa (-rw-------)
	I0626 20:47:13.174518   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:13.174538   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | About to run SSH command:
	I0626 20:47:13.174553   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | exit 0
	I0626 20:47:13.265799   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | SSH cmd err, output: <nil>: 
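
For reference, the WaitForSSH probe boils down to running "exit 0" through a plain external ssh client with the options from the DBG lines above. Assembled by hand (arguments copied from the log, reordered so the options precede the destination):

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	  -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	  -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa \
	  -p 22 docker@192.168.61.238 'exit 0'
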
	I0626 20:47:13.266189   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetConfigRaw
	I0626 20:47:13.266850   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.269749   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270212   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.270253   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.270498   47779 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/config.json ...
	I0626 20:47:13.270732   47779 machine.go:88] provisioning docker machine ...
	I0626 20:47:13.270758   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:13.270959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271112   47779 buildroot.go:166] provisioning hostname "default-k8s-diff-port-473235"
	I0626 20:47:13.271134   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.271250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.273679   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274087   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.274135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.274273   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.274446   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274618   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.274747   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.274940   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.275353   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.275369   47779 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-473235 && echo "default-k8s-diff-port-473235" | sudo tee /etc/hostname
	I0626 20:47:13.416565   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-473235
	
	I0626 20:47:13.416595   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.420132   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420596   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.420670   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.420944   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.421172   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421392   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.421571   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.421821   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.422425   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.422457   47779 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-473235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-473235/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-473235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:13.566095   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 20:47:13.566131   47779 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:13.566175   47779 buildroot.go:174] setting up certificates
	I0626 20:47:13.566192   47779 provision.go:83] configureAuth start
	I0626 20:47:13.566206   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetMachineName
	I0626 20:47:13.566509   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:13.569795   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570251   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.570283   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.570476   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.573020   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573439   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.573475   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.573704   47779 provision.go:138] copyHostCerts
	I0626 20:47:13.573782   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:13.573795   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:13.573859   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:13.573976   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:13.573987   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:13.574016   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:13.574094   47779 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:13.574108   47779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:13.574134   47779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:13.574199   47779 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-473235 san=[192.168.61.238 192.168.61.238 localhost 127.0.0.1 minikube default-k8s-diff-port-473235]
	I0626 20:47:13.795155   47779 provision.go:172] copyRemoteCerts
	I0626 20:47:13.795207   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:13.795230   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.798039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798457   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.798512   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.798706   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.798918   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.799130   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.799274   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:13.892185   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:13.921840   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0626 20:47:13.951311   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:13.980185   47779 provision.go:86] duration metric: configureAuth took 413.976937ms
	I0626 20:47:13.980216   47779 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:13.980460   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:47:13.980551   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:13.983814   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984217   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:13.984265   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:13.984604   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:13.984826   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985010   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:13.985144   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:13.985344   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:13.985947   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:13.985979   47779 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:14.317679   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:14.317702   47779 machine.go:91] provisioned docker machine in 1.046953094s
	I0626 20:47:14.317713   47779 start.go:300] post-start starting for "default-k8s-diff-port-473235" (driver="kvm2")
	I0626 20:47:14.317723   47779 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:14.317744   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.318064   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:14.318101   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.321001   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321358   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.321408   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.321598   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.321806   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.321986   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.322139   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.414722   47779 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:14.419797   47779 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:14.419822   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:14.419895   47779 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:14.419990   47779 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:14.420118   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:14.430766   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:14.458086   47779 start.go:303] post-start completed in 140.355388ms
	I0626 20:47:14.458107   47779 fix.go:56] fixHost completed within 21.823695632s
	I0626 20:47:14.458125   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.460953   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461277   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.461308   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.461472   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.461651   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.461841   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.462025   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.462175   47779 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:14.462805   47779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.61.238 22 <nil> <nil>}
	I0626 20:47:14.462823   47779 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:47:14.587374   47779 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812434.534091475
	
	I0626 20:47:14.587395   47779 fix.go:206] guest clock: 1687812434.534091475
	I0626 20:47:14.587403   47779 fix.go:219] Guest: 2023-06-26 20:47:14.534091475 +0000 UTC Remote: 2023-06-26 20:47:14.458110543 +0000 UTC m=+159.266861615 (delta=75.980932ms)
	I0626 20:47:14.587446   47779 fix.go:190] guest clock delta is within tolerance: 75.980932ms
	I0626 20:47:14.587456   47779 start.go:83] releasing machines lock for "default-k8s-diff-port-473235", held for 21.953095935s
	I0626 20:47:14.587492   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.587776   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:14.590654   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591111   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.591143   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.591332   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.591869   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592074   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:47:14.592151   47779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:14.592205   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.592451   47779 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:14.592489   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:47:14.595039   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595271   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595585   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595615   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595659   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:14.595698   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:14.595901   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596076   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:47:14.596118   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:47:14.596311   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596344   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:47:14.596466   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.596622   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:47:14.683637   47779 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:14.713738   47779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:14.869873   47779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:14.877719   47779 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:14.877815   47779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:14.893656   47779 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:47:14.893682   47779 start.go:466] detecting cgroup driver to use...
	I0626 20:47:14.893738   47779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:14.908419   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:14.921730   47779 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:14.921812   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:14.940659   47779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:14.955010   47779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:15.062849   47779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:15.193682   47779 docker.go:212] disabling docker service ...
	I0626 20:47:15.193810   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:15.210855   47779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:15.223362   47779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:15.348648   47779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:15.471398   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 20:47:15.496137   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:15.523967   47779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 20:47:15.524041   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.537188   47779 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:15.537258   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.550404   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.563577   47779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:15.574958   47779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:15.588685   47779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:15.600611   47779 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:15.600680   47779 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:15.615658   47779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 20:47:15.628004   47779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:15.763410   47779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:15.982719   47779 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:15.982799   47779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:47:15.990799   47779 start.go:534] Will wait 60s for crictl version
	I0626 20:47:15.990864   47779 ssh_runner.go:195] Run: which crictl
	I0626 20:47:15.997709   47779 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:16.041802   47779 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:16.041893   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.094989   47779 ssh_runner.go:195] Run: crio --version
	I0626 20:47:16.151324   47779 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
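
Collected in one place, the runtime reconfiguration the Run: lines above just walked through (paths and values copied from the log): point CRI-O at the pause image, switch it to the cgroupfs cgroup manager with conmon in the pod cgroup, enable the netfilter prerequisites, and restart the service:

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter   # needed because the sysctl probe above failed with status 255
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
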
	I0626 20:47:12.403841   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:12.420028   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:12.459593   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:12.486209   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:12.486256   47309 system_pods.go:61] "coredns-5d78c9869d-dwkng" [8919aa0b-b8b6-4672-aa75-ea5ea1d27ef6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:12.486270   47309 system_pods.go:61] "etcd-no-preload-934450" [67a1367b-dc99-4613-8a75-796a64f13f0b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:12.486281   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [7452cf79-3e8f-4dce-922a-a52115c7059f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:12.486291   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [a3393645-4d3d-4fab-a32f-c15ff3bfcdca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:12.486300   47309 system_pods.go:61] "kube-proxy-phrv2" [d08fdd52-cc2a-43cb-84c4-170ad241527e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:12.486310   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [cc1c89f8-925a-4847-b693-08fbc4905119] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:12.486319   47309 system_pods.go:61] "metrics-server-74d5c6b9c-7szm5" [d94c68f7-4521-4366-b5db-38f420a78dd2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:12.486331   47309 system_pods.go:61] "storage-provisioner" [7aa74f96-c306-4d70-a211-715b4877b15b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:12.486341   47309 system_pods.go:74] duration metric: took 26.722879ms to wait for pod list to return data ...
	I0626 20:47:12.486359   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:12.490745   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:12.490784   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:12.490809   47309 node_conditions.go:105] duration metric: took 4.437855ms to run NodePressure ...
	I0626 20:47:12.490830   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:12.794912   47309 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800827   47309 kubeadm.go:787] kubelet initialised
	I0626 20:47:12.800855   47309 kubeadm.go:788] duration metric: took 5.915334ms waiting for restarted kubelet to initialise ...
	I0626 20:47:12.800865   47309 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:12.807162   47309 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:14.822450   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:14.614985   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Start
	I0626 20:47:14.615159   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring networks are active...
	I0626 20:47:14.615866   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network default is active
	I0626 20:47:14.616331   46683 main.go:141] libmachine: (old-k8s-version-490377) Ensuring network mk-old-k8s-version-490377 is active
	I0626 20:47:14.616785   46683 main.go:141] libmachine: (old-k8s-version-490377) Getting domain xml...
	I0626 20:47:14.617507   46683 main.go:141] libmachine: (old-k8s-version-490377) Creating domain...
	I0626 20:47:16.055502   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting to get IP...
	I0626 20:47:16.056448   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.056913   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.057009   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.056935   48478 retry.go:31] will retry after 281.770624ms: waiting for machine to come up
	I0626 20:47:16.340685   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.341472   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.341496   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.341268   48478 retry.go:31] will retry after 249.185886ms: waiting for machine to come up
	I0626 20:47:16.591867   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.592547   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.592718   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.592671   48478 retry.go:31] will retry after 327.814159ms: waiting for machine to come up
	I0626 20:47:17.910025   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:17.910061   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:18.411167   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.425310   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.425345   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:18.910567   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:18.920897   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:18.920933   47605 api_server.go:103] status: https://192.168.39.51:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:19.410736   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:47:19.418228   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0626 20:47:19.428516   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:19.428551   47605 api_server.go:131] duration metric: took 5.764669652s to wait for apiserver health ...
	I0626 20:47:19.428561   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:47:19.428573   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:19.430711   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
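
The 403 -> 500 -> 200 progression above is the normal apiserver startup sequence: anonymous probes get 403 until the RBAC bootstrap grants unauthenticated access to /healthz, the endpoint then returns 500 while the remaining poststarthooks (rbac/bootstrap-roles, the system priority classes) finish, and finally 200 "ok". A hand-rolled equivalent of the poll (illustrative; minikube performs it in Go rather than with curl, and -k stands in for the cluster CA it actually trusts):

	while [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.51:8443/healthz)" != "200" ]; do
	  sleep 0.5
	done
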
	I0626 20:47:16.152563   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetIP
	I0626 20:47:16.156250   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156617   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:47:16.156644   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:47:16.156894   47779 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:16.162480   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:16.180283   47779 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 20:47:16.180336   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:16.227399   47779 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0626 20:47:16.227474   47779 ssh_runner.go:195] Run: which lz4
	I0626 20:47:16.233720   47779 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:16.240423   47779 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:16.240463   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0626 20:47:18.263416   47779 crio.go:444] Took 2.029753 seconds to copy over tarball
	I0626 20:47:18.263515   47779 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:47:16.837607   47309 pod_ready.go:102] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:19.361799   47309 pod_ready.go:92] pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.361869   47309 pod_ready.go:81] duration metric: took 6.554677083s waiting for pod "coredns-5d78c9869d-dwkng" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.361886   47309 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370122   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:19.370145   47309 pod_ready.go:81] duration metric: took 8.249243ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.370157   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391052   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:21.391082   47309 pod_ready.go:81] duration metric: took 2.020917194s waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.391096   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
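
These pod_ready waits poll the Ready condition of each system-critical pod through the API. An equivalent spot check by hand against the same profile (illustrative; the harness uses client-go directly rather than shelling out to kubectl):

	kubectl --context no-preload-934450 -n kube-system wait --timeout=4m \
	  --for=condition=Ready pod/coredns-5d78c9869d-dwkng pod/etcd-no-preload-934450
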
	I0626 20:47:16.922381   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:16.922923   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:16.922952   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:16.922873   48478 retry.go:31] will retry after 486.21568ms: waiting for machine to come up
	I0626 20:47:17.410676   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:17.411282   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:17.411305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:17.411227   48478 retry.go:31] will retry after 606.277374ms: waiting for machine to come up
	I0626 20:47:18.020296   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.021367   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.021400   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.021287   48478 retry.go:31] will retry after 576.843487ms: waiting for machine to come up
	I0626 20:47:18.599674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:18.600326   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:18.600352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:18.600221   48478 retry.go:31] will retry after 857.329718ms: waiting for machine to come up
	I0626 20:47:19.459545   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:19.460101   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:19.460125   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:19.460050   48478 retry.go:31] will retry after 1.017747035s: waiting for machine to come up
	I0626 20:47:20.479538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:20.480140   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:20.480178   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:20.480043   48478 retry.go:31] will retry after 1.379789146s: waiting for machine to come up
	I0626 20:47:19.432325   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:19.461944   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:19.498519   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:19.512703   47605 system_pods.go:59] 9 kube-system pods found
	I0626 20:47:19.512831   47605 system_pods.go:61] "coredns-5d78c9869d-dz48f" [87a67e95-a071-4865-902b-0e401e852456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512860   47605 system_pods.go:61] "coredns-5d78c9869d-lbfsr" [adee7e6b-88b2-412e-bb2d-fc0939bca149] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:19.512905   47605 system_pods.go:61] "etcd-embed-certs-299839" [8aefd012-6a54-4e75-afc9-cc8385212eb3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:19.512937   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [e178b5e8-445c-444f-965e-051233c2fa44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:19.512971   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [e965e4af-a673-4b93-bb63-e7bfc0f9514d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:19.512995   47605 system_pods.go:61] "kube-proxy-q5khr" [6c11d667-3490-4417-8e0c-373fe25d06b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:19.513014   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [0385958c-3f22-4eb8-bdac-cbaeb52fe9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:19.513050   47605 system_pods.go:61] "metrics-server-74d5c6b9c-gb6b2" [b5a15d68-23ee-4274-a147-db6f2eef97e6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:19.513074   47605 system_pods.go:61] "storage-provisioner" [42bd8483-f594-4bf9-8c32-9688d1d99523] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:19.513093   47605 system_pods.go:74] duration metric: took 14.550735ms to wait for pod list to return data ...
	I0626 20:47:19.513125   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:19.519356   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:19.519455   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:19.519513   47605 node_conditions.go:105] duration metric: took 6.36764ms to run NodePressure ...
	I0626 20:47:19.519573   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:19.935407   47605 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943592   47605 kubeadm.go:787] kubelet initialised
	I0626 20:47:19.943622   47605 kubeadm.go:788] duration metric: took 8.187833ms waiting for restarted kubelet to initialise ...
	I0626 20:47:19.943633   47605 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:19.951319   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.957985   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958016   47605 pod_ready.go:81] duration metric: took 6.605612ms waiting for pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.958027   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-dz48f" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.958037   47605 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:19.965229   47605 pod_ready.go:97] node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965312   47605 pod_ready.go:81] duration metric: took 7.251456ms waiting for pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:19.965335   47605 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-299839" hosting pod "coredns-5d78c9869d-lbfsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-299839" has status "Ready":"False"
	I0626 20:47:19.965391   47605 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:22.010596   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
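
The pod_ready.go polling above checks each pod's PodReady condition until it reports True or the 4m0s budget runs out. A minimal client-go sketch of the same check (pod name and namespace taken from the log; assumes a kubeconfig at the default location):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, which is
// the condition pod_ready.go is polling for in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-299839", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
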
	I0626 20:47:21.752755   47779 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.48920102s)
	I0626 20:47:21.752790   47779 crio.go:451] Took 3.489344 seconds to extract the tarball
	I0626 20:47:21.752802   47779 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:47:21.800026   47779 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:21.844486   47779 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 20:47:21.844504   47779 cache_images.go:84] Images are preloaded, skipping loading
	I0626 20:47:21.844573   47779 ssh_runner.go:195] Run: crio config
	I0626 20:47:21.924367   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:21.924397   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:21.924411   47779 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:21.924431   47779 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.238 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-473235 NodeName:default-k8s-diff-port-473235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 20:47:21.924593   47779 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-473235"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:47:21.924685   47779 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-473235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
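
The kubelet unit drop-in printed above is generated from the cluster config (runtime socket, node name, node IP). A small text/template sketch that renders a comparable drop-in; the placeholder names are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// unit is a template for a kubelet systemd drop-in like the one logged above.
const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint={{.Socket}} --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio",
		"Version": "v1.27.3",
		"Socket":  "unix:///var/run/crio/crio.sock",
		"Node":    "default-k8s-diff-port-473235",
		"IP":      "192.168.61.238",
	})
	if err != nil {
		panic(err)
	}
}
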
	I0626 20:47:21.924756   47779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 20:47:21.934851   47779 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:21.934951   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:21.944791   47779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0626 20:47:21.963087   47779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:21.981936   47779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0626 20:47:22.002207   47779 ssh_runner.go:195] Run: grep 192.168.61.238	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:22.006443   47779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:22.019555   47779 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235 for IP: 192.168.61.238
	I0626 20:47:22.019591   47779 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:22.019794   47779 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:22.019859   47779 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:22.019983   47779 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.key
	I0626 20:47:22.020069   47779 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key.761b3e7f
	I0626 20:47:22.020126   47779 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key
	I0626 20:47:22.020257   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:22.020296   47779 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:22.020309   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:22.020340   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:22.020376   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:22.020418   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:22.020475   47779 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:22.021354   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:22.045205   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:22.069269   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:22.092387   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:22.120395   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:22.143199   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:22.167864   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:22.192223   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:22.218085   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:22.243249   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:22.269200   47779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:22.294015   47779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:22.313139   47779 ssh_runner.go:195] Run: openssl version
	I0626 20:47:22.319998   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:22.330864   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337082   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.337144   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:22.343158   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:22.354507   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:22.366438   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371070   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.371127   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:22.376858   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:22.387928   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:22.398665   47779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403091   47779 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.403139   47779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:22.410314   47779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
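
Each certificate above is installed twice: copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash plus a ".0" suffix, which is how OpenSSL-based clients locate trust anchors. A sketch of that hash-and-link step, shelling out to openssl exactly as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash, then
// exposes the cert in certsDir under "<hash>.0" so OpenSSL clients find it.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
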
	I0626 20:47:22.421729   47779 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:22.426373   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:22.432450   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:22.438093   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:22.446065   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:22.452103   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:22.457940   47779 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
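
The six openssl runs above use -checkend 86400 to verify that no control-plane certificate expires within the next 24 hours. The same check in pure Go with crypto/x509 (path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires in less
// than d, the crypto/x509 equivalent of "openssl x509 -noout -checkend 86400".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
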
	I0626 20:47:22.464492   47779 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-473235 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-473235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:22.464647   47779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:22.464707   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:22.497723   47779 cri.go:89] found id: ""
	I0626 20:47:22.497803   47779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:22.508914   47779 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:22.508940   47779 kubeadm.go:636] restartCluster start
	I0626 20:47:22.508994   47779 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:22.519855   47779 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:22.521400   47779 kubeconfig.go:92] found "default-k8s-diff-port-473235" server: "https://192.168.61.238:8444"
	I0626 20:47:22.525126   47779 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:22.536252   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:22.536311   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:22.548698   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.049731   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.049805   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.062575   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:23.548966   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:23.549050   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:23.566351   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.048839   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.048917   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.065016   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:24.549110   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:24.549211   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:24.563150   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:25.049739   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.049828   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.066148   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
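
Each "Checking apiserver status ..." cycle above is a pgrep for the kube-apiserver process, repeated roughly every 500ms until a pid appears or the caller gives up. A sketch of that loop (same pgrep pattern as the log; the timeout value here is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep about every 500ms, like the api_server.go
// loop above, until a kube-apiserver process shows up or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}
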
	I0626 20:47:23.496598   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.496624   47309 pod_ready.go:81] duration metric: took 2.105519396s waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.496637   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504045   47309 pod_ready.go:92] pod "kube-proxy-phrv2" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:23.504067   47309 pod_ready.go:81] duration metric: took 7.42294ms waiting for pod "kube-proxy-phrv2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:23.504078   47309 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022096   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:25.022123   47309 pod_ready.go:81] duration metric: took 1.518037516s waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.022135   47309 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:21.861798   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:21.981234   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:21.981272   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:21.862292   48478 retry.go:31] will retry after 2.138021733s: waiting for machine to come up
	I0626 20:47:24.002651   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:24.003184   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:24.003215   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:24.003122   48478 retry.go:31] will retry after 2.016131828s: waiting for machine to come up
	I0626 20:47:26.020987   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:26.021487   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:26.021511   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:26.021427   48478 retry.go:31] will retry after 2.317082546s: waiting for machine to come up
	I0626 20:47:24.497636   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:26.997525   47605 pod_ready.go:102] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:27.997348   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:27.997394   47605 pod_ready.go:81] duration metric: took 8.031967272s waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:27.997408   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:25.548979   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:25.549054   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:25.566040   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.049569   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.049636   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.061513   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:26.548864   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:26.548952   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:26.566095   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.049674   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.049818   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.067169   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.549748   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:27.549831   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:27.568977   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.048852   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.048921   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.064935   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:28.549510   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:28.549614   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:28.562781   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.049396   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.049482   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.063237   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:29.548762   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:29.548853   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:29.561289   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:30.048758   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.048832   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.061079   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:27.040010   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:29.536317   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.537367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:28.340238   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:28.340738   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:28.340774   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:28.340660   48478 retry.go:31] will retry after 3.9887538s: waiting for machine to come up
	I0626 20:47:30.014224   47605 pod_ready.go:102] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:31.016636   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.016660   47605 pod_ready.go:81] duration metric: took 3.019245103s waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.016669   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022769   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.022794   47605 pod_ready.go:81] duration metric: took 6.118745ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.022806   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.031975   47605 pod_ready.go:92] pod "kube-proxy-q5khr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.032004   47605 pod_ready.go:81] duration metric: took 9.189713ms waiting for pod "kube-proxy-q5khr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.032015   47605 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040203   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:31.040231   47605 pod_ready.go:81] duration metric: took 8.207477ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:31.040244   47605 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:33.054175   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:30.549812   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:30.549897   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:30.562540   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.049000   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.049071   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.061358   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:31.549602   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:31.549664   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:31.562690   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.049131   47779 api_server.go:166] Checking apiserver status ...
	I0626 20:47:32.049223   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:32.061951   47779 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:32.536775   47779 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0626 20:47:32.536827   47779 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:32.536843   47779 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:32.536914   47779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:32.571353   47779 cri.go:89] found id: ""
	I0626 20:47:32.571434   47779 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:32.588931   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:32.599519   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:32.599585   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610183   47779 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:32.610212   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:32.738386   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.418561   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.612946   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.740311   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:33.830927   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:33.830992   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.372343   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:34.872109   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
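
Rather than a full kubeadm init, the restart path above replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A sketch of that sequence (binary and config paths taken from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	bin := "/var/lib/minikube/binaries/v1.27.3/kubeadm"
	for _, p := range phases {
		// Each phase reuses the same generated config, exactly as in the log.
		args := append([]string{bin, "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all phases completed")
}
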
	I0626 20:47:33.542864   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:36.037521   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:32.332668   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:32.333139   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | unable to find current IP address of domain old-k8s-version-490377 in network mk-old-k8s-version-490377
	I0626 20:47:32.333169   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | I0626 20:47:32.333084   48478 retry.go:31] will retry after 3.571549947s: waiting for machine to come up
	I0626 20:47:35.906478   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.906962   46683 main.go:141] libmachine: (old-k8s-version-490377) Found IP for machine: 192.168.72.111
	I0626 20:47:35.906994   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has current primary IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.907004   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserving static IP address...
	I0626 20:47:35.907527   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.907573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | skip adding static IP to network mk-old-k8s-version-490377 - found existing host DHCP lease matching {name: "old-k8s-version-490377", mac: "52:54:00:cc:27:8f", ip: "192.168.72.111"}
	I0626 20:47:35.907588   46683 main.go:141] libmachine: (old-k8s-version-490377) Reserved static IP address: 192.168.72.111
	I0626 20:47:35.907605   46683 main.go:141] libmachine: (old-k8s-version-490377) Waiting for SSH to be available...
	I0626 20:47:35.907658   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Getting to WaitForSSH function...
	I0626 20:47:35.909932   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910346   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:35.910383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:35.910538   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH client type: external
	I0626 20:47:35.910573   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Using SSH private key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa (-rw-------)
	I0626 20:47:35.910604   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0626 20:47:35.910620   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | About to run SSH command:
	I0626 20:47:35.910635   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | exit 0
	I0626 20:47:36.006056   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | SSH cmd err, output: <nil>: 
	I0626 20:47:36.006429   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetConfigRaw
	I0626 20:47:36.007160   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.010144   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010519   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.010551   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.010863   46683 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/config.json ...
	I0626 20:47:36.011106   46683 machine.go:88] provisioning docker machine ...
	I0626 20:47:36.011130   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.011366   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011542   46683 buildroot.go:166] provisioning hostname "old-k8s-version-490377"
	I0626 20:47:36.011561   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.011705   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.014236   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014643   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.014674   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.014821   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.015013   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015156   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.015371   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.015595   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.016010   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.016029   46683 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-490377 && echo "old-k8s-version-490377" | sudo tee /etc/hostname
	I0626 20:47:36.160735   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490377
	
	I0626 20:47:36.160797   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.163857   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164373   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.164425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.164566   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.164778   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.164983   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.165128   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.165311   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.166001   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.166030   46683 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-490377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-490377/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-490377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 20:47:36.302740   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
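
The SSH command above is an idempotent /etc/hosts fix: it only rewrites (or appends) the 127.0.1.1 entry when the new hostname is not already present. A small Go sketch that assembles the same shell snippet for a given hostname; this mirrors the logged script rather than minikube's actual source:

package main

import "fmt"

// buildHostsFix returns the idempotent /etc/hosts snippet shown above: only
// rewrite (or append) the 127.0.1.1 entry when the hostname is missing.
func buildHostsFix(host string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, host)
}

func main() {
	fmt.Println(buildHostsFix("old-k8s-version-490377"))
}
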
	I0626 20:47:36.302789   46683 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16761-7242/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-7242/.minikube}
	I0626 20:47:36.302839   46683 buildroot.go:174] setting up certificates
	I0626 20:47:36.302852   46683 provision.go:83] configureAuth start
	I0626 20:47:36.302868   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetMachineName
	I0626 20:47:36.303151   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:36.305958   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306411   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.306439   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.306667   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.309069   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309447   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.309480   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.309538   46683 provision.go:138] copyHostCerts
	I0626 20:47:36.309622   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem, removing ...
	I0626 20:47:36.309635   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem
	I0626 20:47:36.309702   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/key.pem (1675 bytes)
	I0626 20:47:36.309813   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem, removing ...
	I0626 20:47:36.309830   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem
	I0626 20:47:36.309868   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/ca.pem (1078 bytes)
	I0626 20:47:36.309938   46683 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem, removing ...
	I0626 20:47:36.309947   46683 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem
	I0626 20:47:36.309970   46683 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-7242/.minikube/cert.pem (1123 bytes)
	I0626 20:47:36.310026   46683 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-490377 san=[192.168.72.111 192.168.72.111 localhost 127.0.0.1 minikube old-k8s-version-490377]
	I0626 20:47:36.441131   46683 provision.go:172] copyRemoteCerts
	I0626 20:47:36.441183   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 20:47:36.441204   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.444557   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445034   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.445067   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.445311   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.445540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.445700   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.445857   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:36.542375   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0626 20:47:36.570185   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0626 20:47:36.596725   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 20:47:36.622954   46683 provision.go:86] duration metric: configureAuth took 320.087643ms
	I0626 20:47:36.622983   46683 buildroot.go:189] setting minikube options for container-runtime
	I0626 20:47:36.623205   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:47:36.623301   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.626305   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626634   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.626666   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.626856   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.627048   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627224   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.627349   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.627520   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:36.627929   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:36.627954   46683 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 20:47:36.963666   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 20:47:36.963695   46683 machine.go:91] provisioned docker machine in 952.57418ms
	I0626 20:47:36.963707   46683 start.go:300] post-start starting for "old-k8s-version-490377" (driver="kvm2")
	I0626 20:47:36.963719   46683 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 20:47:36.963747   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:36.964067   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 20:47:36.964099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:36.966948   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967352   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:36.967383   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:36.967528   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:36.967735   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:36.967900   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:36.968052   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.070309   46683 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 20:47:37.075040   46683 info.go:137] Remote host: Buildroot 2021.02.12
	I0626 20:47:37.075064   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/addons for local assets ...
	I0626 20:47:37.075125   46683 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-7242/.minikube/files for local assets ...
	I0626 20:47:37.075208   46683 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem -> 144432.pem in /etc/ssl/certs
	I0626 20:47:37.075306   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 20:47:37.086362   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:37.110475   46683 start.go:303] post-start completed in 146.752359ms
	I0626 20:47:37.110502   46683 fix.go:56] fixHost completed within 22.522880386s
	I0626 20:47:37.110525   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.113530   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.113925   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.113961   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.114168   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.114372   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114577   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.114730   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.114896   46683 main.go:141] libmachine: Using SSH client type: native
	I0626 20:47:37.115549   46683 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0626 20:47:37.115572   46683 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0626 20:47:37.247352   46683 main.go:141] libmachine: SSH cmd err, output: <nil>: 1687812457.183569581
	
	I0626 20:47:37.247376   46683 fix.go:206] guest clock: 1687812457.183569581
	I0626 20:47:37.247386   46683 fix.go:219] Guest: 2023-06-26 20:47:37.183569581 +0000 UTC Remote: 2023-06-26 20:47:37.110506986 +0000 UTC m=+360.350082215 (delta=73.062595ms)
	I0626 20:47:37.247410   46683 fix.go:190] guest clock delta is within tolerance: 73.062595ms
	I0626 20:47:37.247416   46683 start.go:83] releasing machines lock for "old-k8s-version-490377", held for 22.659832787s
	I0626 20:47:37.247442   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.247723   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:37.250740   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251154   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.251194   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.251316   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.251835   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252015   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:47:37.252101   46683 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 20:47:37.252144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.252251   46683 ssh_runner.go:195] Run: cat /version.json
	I0626 20:47:37.252273   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:47:37.255147   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255231   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255440   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255464   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255584   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.255756   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.255765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:37.255792   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:37.255930   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.255946   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:47:37.256080   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.256099   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:47:37.256206   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:47:37.256301   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:47:37.370571   46683 ssh_runner.go:195] Run: systemctl --version
	I0626 20:47:37.376548   46683 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 20:47:37.531359   46683 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0626 20:47:37.540038   46683 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0626 20:47:37.540104   46683 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 20:47:37.556531   46683 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 20:47:37.556554   46683 start.go:466] detecting cgroup driver to use...
	I0626 20:47:37.556620   46683 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 20:47:37.574430   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 20:47:37.586766   46683 docker.go:196] disabling cri-docker service (if available) ...
	I0626 20:47:37.586829   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 20:47:37.599572   46683 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 20:47:37.612901   46683 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 20:47:37.717489   46683 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 20:47:37.851503   46683 docker.go:212] disabling docker service ...
	I0626 20:47:37.851576   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 20:47:37.864932   46683 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 20:47:37.877087   46683 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 20:47:37.990007   46683 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 20:47:38.107613   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
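
The docker/cri-docker teardown above (stop the sockets and services, disable, mask, then verify) condenses to a short loop. A sketch, assuming systemd units with the names shown in the log:

	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit" 2>/dev/null || true   # tolerate units that are absent
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker && echo "docker still active" >&2
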
	I0626 20:47:38.122183   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 20:47:38.141502   46683 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0626 20:47:38.141567   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.152052   46683 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 20:47:38.152128   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.161786   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 20:47:38.172779   46683 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
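
The four sed invocations above edit CRI-O's drop-in config in place. Written out, the end state of /etc/crio/crio.conf.d/02-crio.conf would look roughly like this — a sketch assuming the stock minikube drop-in layout, not the verbatim file from this run:

	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.1"

		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
	EOF
	sudo systemctl restart crio
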
	I0626 20:47:38.182823   46683 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 20:47:38.192695   46683 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 20:47:38.201322   46683 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0626 20:47:38.201404   46683 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0626 20:47:38.213549   46683 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
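
The netfilter fallback above is the usual idiom for bridged CNI on a fresh guest: the sysctl read exits 255 while br_netfilter is unloaded, so the module is loaded and IP forwarding enabled afterwards. As a standalone sketch:

	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter   # creates /proc/sys/net/bridge/*
	fi
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null
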
	I0626 20:47:38.225080   46683 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 20:47:38.336249   46683 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 20:47:38.508323   46683 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 20:47:38.508443   46683 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 20:47:38.514430   46683 start.go:534] Will wait 60s for crictl version
	I0626 20:47:38.514496   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:38.518918   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 20:47:38.559642   46683 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0626 20:47:38.559731   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.616720   46683 ssh_runner.go:195] Run: crio --version
	I0626 20:47:38.678573   46683 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0626 20:47:35.555132   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.053446   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:35.373039   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.872006   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:35.895929   47779 api_server.go:72] duration metric: took 2.064992302s to wait for apiserver process to appear ...
	I0626 20:47:35.895959   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:35.895982   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:35.896602   47779 api_server.go:269] stopped: https://192.168.61.238:8444/healthz: Get "https://192.168.61.238:8444/healthz": dial tcp 192.168.61.238:8444: connect: connection refused
	I0626 20:47:36.397305   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.868801   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.868839   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.868854   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.907251   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.907280   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:39.907310   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:39.921394   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:47:39.921428   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:47:40.397045   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.405040   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.405071   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:40.897690   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:40.904374   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0626 20:47:40.904424   47779 api_server.go:103] status: https://192.168.61.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0626 20:47:41.396883   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:47:41.404743   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:47:41.420191   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:47:41.420219   47779 api_server.go:131] duration metric: took 5.524252602s to wait for apiserver health ...
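
The healthz probe sequence above (connection refused, then 403 for system:anonymous, then 500 while bootstrap hooks run, then 200) is why the retry loop treats any HTTP response as progress. A minimal curl-based equivalent of the poll — the URL is the apiserver endpoint from this run, and -k skips TLS verification just as the unauthenticated probe does:

	until [ "$(curl -sk -o /dev/null -w '%{http_code}' --max-time 2 \
	    https://192.168.61.238:8444/healthz)" = "200" ]; do
	  sleep 0.5
	done
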
	I0626 20:47:41.420231   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:47:41.420249   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:41.422187   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:47:38.537628   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:40.538267   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:38.680019   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetIP
	I0626 20:47:38.682934   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683263   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:47:38.683294   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:47:38.683534   46683 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0626 20:47:38.687976   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 20:47:38.701534   46683 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 20:47:38.701610   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:38.739497   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:38.739584   46683 ssh_runner.go:195] Run: which lz4
	I0626 20:47:38.744080   46683 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 20:47:38.748755   46683 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 20:47:38.748792   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0626 20:47:40.654759   46683 crio.go:444] Took 1.910714 seconds to copy over tarball
	I0626 20:47:40.654830   46683 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 20:47:40.057751   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:42.555707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:41.423617   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:47:41.447117   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:47:41.485897   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:47:41.505667   47779 system_pods.go:59] 8 kube-system pods found
	I0626 20:47:41.505714   47779 system_pods.go:61] "coredns-5d78c9869d-78zrr" [2927dce3-aa13-4ed4-b5a4-bc1b101ec044] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0626 20:47:41.505730   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [5bbba401-cfdd-4e97-ac44-3d1410344b23] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0626 20:47:41.505742   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [90d064bc-d31f-4690-b100-8979cdd518c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0626 20:47:41.505755   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [3f686efe-3c90-42ed-a1b9-2cda3e7e49b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0626 20:47:41.505773   47779 system_pods.go:61] "kube-proxy-7t2dk" [bebeb55d-8c7d-4543-9ee1-adbd946904f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0626 20:47:41.505786   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [c2436cf6-0128-425c-9db3-b3d01e5fb5e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0626 20:47:41.505799   47779 system_pods.go:61] "metrics-server-74d5c6b9c-swcxn" [81e42c6b-4c7d-40b1-bd4a-ccf7ce2dea17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:47:41.505811   47779 system_pods.go:61] "storage-provisioner" [18d1c7dc-00a6-4842-b441-f3468adde4ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0626 20:47:41.505822   47779 system_pods.go:74] duration metric: took 19.895923ms to wait for pod list to return data ...
	I0626 20:47:41.505833   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:47:41.515165   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:47:41.515201   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:47:41.515215   47779 node_conditions.go:105] duration metric: took 9.372368ms to run NodePressure ...
	I0626 20:47:41.515243   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:41.848353   47779 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854780   47779 kubeadm.go:787] kubelet initialised
	I0626 20:47:41.854805   47779 kubeadm.go:788] duration metric: took 6.420882ms waiting for restarted kubelet to initialise ...
	I0626 20:47:41.854814   47779 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:47:41.861323   47779 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.867181   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867214   47779 pod_ready.go:81] duration metric: took 5.86597ms waiting for pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.867225   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "coredns-5d78c9869d-78zrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.867235   47779 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.872900   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872928   47779 pod_ready.go:81] duration metric: took 5.684109ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.872940   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.872948   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.881471   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881501   47779 pod_ready.go:81] duration metric: took 8.543041ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.881513   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.881531   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:41.892246   47779 pod_ready.go:97] node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892292   47779 pod_ready.go:81] duration metric: took 10.741136ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	E0626 20:47:41.892310   47779 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-473235" hosting pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-473235" has status "Ready":"False"
	I0626 20:47:41.892325   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297272   47779 pod_ready.go:92] pod "kube-proxy-7t2dk" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:43.297299   47779 pod_ready.go:81] duration metric: took 1.404965565s waiting for pod "kube-proxy-7t2dk" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:43.297308   47779 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:42.544224   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.846930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:44.389432   46683 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.73456858s)
	I0626 20:47:44.389462   46683 crio.go:451] Took 3.734677 seconds to extract the tarball
	I0626 20:47:44.389480   46683 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 20:47:44.438169   46683 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 20:47:44.478220   46683 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0626 20:47:44.478250   46683 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 20:47:44.478337   46683 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.478364   46683 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.478383   46683 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.478384   46683 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.478450   46683 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0626 20:47:44.478365   46683 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.478345   46683 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.478339   46683 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479752   46683 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:44.479758   46683 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.479760   46683 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.479759   46683 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.479748   46683 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.479802   46683 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.479810   46683 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.479817   46683 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0626 20:47:44.681554   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720619   46683 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0626 20:47:44.720677   46683 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.720730   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.724810   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0626 20:47:44.753258   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0626 20:47:44.765072   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.767167   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.768723   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0626 20:47:44.769466   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.769474   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.807428   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.904206   46683 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0626 20:47:44.904243   46683 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0626 20:47:44.904250   46683 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.904261   46683 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.904295   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926166   46683 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0626 20:47:44.926203   46683 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.926204   46683 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0626 20:47:44.926222   46683 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.926222   46683 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0626 20:47:44.926248   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926247   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.926251   46683 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0626 20:47:44.926365   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936135   46683 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0626 20:47:44.936175   46683 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:44.936236   46683 ssh_runner.go:195] Run: which crictl
	I0626 20:47:44.936252   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0626 20:47:44.936274   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0626 20:47:44.940272   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0626 20:47:44.940352   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0626 20:47:44.940409   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0626 20:47:44.952147   46683 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0626 20:47:45.031640   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0626 20:47:45.031677   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0626 20:47:45.061947   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0626 20:47:45.062070   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0626 20:47:45.062166   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0626 20:47:45.062261   46683 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.062279   46683 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0626 20:47:45.067511   46683 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0626 20:47:45.067561   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0626 20:47:45.094726   46683 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.094780   46683 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0626 20:47:45.384887   46683 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:47:45.947601   46683 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0626 20:47:45.947707   46683 cache_images.go:92] LoadImages completed in 1.469441722s
	W0626 20:47:45.947778   46683 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16761-7242/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
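
The LoadImages flow above is: inspect each required image in the runtime, crictl rmi any stale tag, then transfer the cached tarball and podman-load it; the closing warning only means one cache file (coredns_1.6.2) was never downloaded on the Jenkins host. A sketch of the per-image check-and-load step (IMG and TARBALL are illustrative names):

	IMG=registry.k8s.io/pause:3.1
	TARBALL=/var/lib/minikube/images/pause_3.1
	if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	  sudo podman load -i "$TARBALL"   # tarball was scp'd over beforehand
	fi
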
	I0626 20:47:45.947863   46683 ssh_runner.go:195] Run: crio config
	I0626 20:47:46.009928   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:47:46.009955   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:47:46.009968   46683 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 20:47:46.009987   46683 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-490377 NodeName:old-k8s-version-490377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0626 20:47:46.010140   46683 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-490377"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-490377
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.111:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 20:47:46.010224   46683 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-490377 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
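
The rendered kubeadm config above pins podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12; those two ranges must not overlap or pod and service routing collide. A minimal Go sketch of that sanity check (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR ranges share any addresses. Aligned
// CIDR ranges intersect exactly when one contains the other's network
// address, so checking both directions is sufficient.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, podCIDR, err := net.ParseCIDR("10.244.0.0/16") // podSubnet from the config above
	if err != nil {
		panic(err)
	}
	_, svcCIDR, err := net.ParseCIDR("10.96.0.0/12") // serviceSubnet from the config above
	if err != nil {
		panic(err)
	}
	fmt.Println("pod/service CIDRs overlap:", overlaps(podCIDR, svcCIDR)) // false
}
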
	I0626 20:47:46.010284   46683 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0626 20:47:46.023111   46683 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 20:47:46.023196   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 20:47:46.034988   46683 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0626 20:47:46.056824   46683 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 20:47:46.077802   46683 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0626 20:47:46.102465   46683 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0626 20:47:46.107391   46683 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
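
The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current one. A rough Go equivalent of that idempotent rewrite (a sketch, not minikube's implementation; the IP and hostname come from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one
// line maps host to ip, keeping every unrelated line as-is.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.111", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
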
	I0626 20:47:46.121242   46683 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377 for IP: 192.168.72.111
	I0626 20:47:46.121277   46683 certs.go:190] acquiring lock for shared ca certs: {Name:mk9fe5873916c5e0cd7e508a2df682a4cedd3bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:47:46.121466   46683 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key
	I0626 20:47:46.121520   46683 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key
	I0626 20:47:46.121635   46683 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.key
	I0626 20:47:46.121735   46683 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key.760f2aeb
	I0626 20:47:46.121789   46683 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key
	I0626 20:47:46.121928   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem (1338 bytes)
	W0626 20:47:46.121970   46683 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443_empty.pem, impossibly tiny 0 bytes
	I0626 20:47:46.121985   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 20:47:46.122024   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem (1078 bytes)
	I0626 20:47:46.122063   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem (1123 bytes)
	I0626 20:47:46.122098   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/certs/home/jenkins/minikube-integration/16761-7242/.minikube/certs/key.pem (1675 bytes)
	I0626 20:47:46.122158   46683 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem (1708 bytes)
	I0626 20:47:46.123026   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 20:47:46.149101   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 20:47:46.179305   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 20:47:46.207421   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 20:47:46.233407   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 20:47:46.259148   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 20:47:46.284728   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 20:47:46.312152   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 20:47:46.341061   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/ssl/certs/144432.pem --> /usr/share/ca-certificates/144432.pem (1708 bytes)
	I0626 20:47:46.370455   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 20:47:46.398160   46683 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-7242/.minikube/certs/14443.pem --> /usr/share/ca-certificates/14443.pem (1338 bytes)
	I0626 20:47:46.424710   46683 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 20:47:46.446379   46683 ssh_runner.go:195] Run: openssl version
	I0626 20:47:46.452825   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14443.pem && ln -fs /usr/share/ca-certificates/14443.pem /etc/ssl/certs/14443.pem"
	I0626 20:47:46.466808   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472676   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 19:45 /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.472760   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14443.pem
	I0626 20:47:46.479077   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14443.pem /etc/ssl/certs/51391683.0"
	I0626 20:47:46.490061   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144432.pem && ln -fs /usr/share/ca-certificates/144432.pem /etc/ssl/certs/144432.pem"
	I0626 20:47:46.501801   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.506966   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 19:45 /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.507034   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144432.pem
	I0626 20:47:46.513146   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144432.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 20:47:46.523600   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 20:47:46.534659   46683 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540612   46683 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 19:36 /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.540677   46683 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 20:47:46.548499   46683 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 20:47:46.562786   46683 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 20:47:46.569679   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0626 20:47:46.576129   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0626 20:47:46.582331   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0626 20:47:46.588334   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0626 20:47:46.595635   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0626 20:47:46.603058   46683 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
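
Each `openssl x509 ... -checkend 86400` run above asks whether a certificate expires within the next 24 hours (exit status 0 means it is still valid past the window). The same check in Go using only the standard library, as a sketch (the path is one of the files from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("%s: no certificate PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
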
	I0626 20:47:46.611126   46683 kubeadm.go:404] StartCluster: {Name:old-k8s-version-490377 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-490377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 20:47:46.611211   46683 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 20:47:46.611277   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:46.650099   46683 cri.go:89] found id: ""
	I0626 20:47:46.650177   46683 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 20:47:46.660940   46683 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0626 20:47:46.660964   46683 kubeadm.go:636] restartCluster start
	I0626 20:47:46.661022   46683 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0626 20:47:46.671400   46683 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:46.672450   46683 kubeconfig.go:92] found "old-k8s-version-490377" server: "https://192.168.72.111:8443"
	I0626 20:47:46.675477   46683 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0626 20:47:46.684496   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:46.684568   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:46.695719   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:45.056085   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.554295   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:45.865956   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:48.003697   47779 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.505286   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:47:49.505314   47779 pod_ready.go:81] duration metric: took 6.207998312s waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:49.505328   47779 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	I0626 20:47:47.037142   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:49.037207   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.535460   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:47.196149   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.196252   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.211751   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:47.696286   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:47.696381   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:47.707472   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.195967   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.196041   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.207809   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:48.696375   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:48.696449   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:48.708571   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.196097   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.196176   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.207717   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:49.696692   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:49.696768   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:49.708954   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.196531   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.196611   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.209111   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.696563   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:50.696648   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:50.708744   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.196237   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.196305   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.207654   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:51.695908   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:51.695988   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:51.708029   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:50.056186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.057083   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:51.519442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.520019   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:53.536833   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.036673   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:52.196170   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.196233   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.208953   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:52.696518   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:52.696600   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:52.707537   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.196046   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.196113   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.207272   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:53.695791   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:53.695873   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:53.706845   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.196452   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.196530   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.208048   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:54.696169   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:54.696236   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:54.707640   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.195889   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.195968   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.207560   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:55.695899   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:55.695978   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:55.707573   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:56.195900   46683 api_server.go:166] Checking apiserver status ...
	I0626 20:47:56.195973   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0626 20:47:56.207335   46683 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0626 20:47:56.685138   46683 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
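
The block above is minikube polling roughly every half second for a kube-apiserver process until a deadline expires, then concluding the cluster needs reconfiguring. A stripped-down sketch of that poll (the pgrep pattern is taken from the log; the loop shape is illustrative, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID runs `pgrep -xnf kube-apiserver.*minikube.*`
// repeatedly until it succeeds or ctx expires.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
			// try again on the next tick
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}
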
	I0626 20:47:56.685165   46683 kubeadm.go:1128] stopping kube-system containers ...
	I0626 20:47:56.685180   46683 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0626 20:47:56.685239   46683 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 20:47:56.719427   46683 cri.go:89] found id: ""
	I0626 20:47:56.719494   46683 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0626 20:47:56.735328   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:47:56.747355   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:47:56.747420   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756129   46683 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0626 20:47:56.756156   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:54.554213   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:57.052902   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:59.055349   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.018337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.025514   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:58.039195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.538216   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:47:56.883656   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.423073   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.641018   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.751205   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:47:57.840521   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:47:57.840645   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.355178   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:58.854929   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.355164   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:47:59.385611   46683 api_server.go:72] duration metric: took 1.545094971s to wait for apiserver process to appear ...
	I0626 20:47:59.385632   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:47:59.385650   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:01.553510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.554922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:00.520442   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.021809   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:03.040767   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.535801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:04.386860   46683 api_server.go:269] stopped: https://192.168.72.111:8443/healthz: Get "https://192.168.72.111:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0626 20:48:04.888001   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:05.958461   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0626 20:48:05.958486   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0626 20:48:05.958498   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.017029   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.017061   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.387577   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.394038   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.394072   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:06.887033   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:06.902891   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0626 20:48:06.902931   46683 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0626 20:48:07.387632   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:48:07.393827   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:48:07.402591   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:48:07.402618   46683 api_server.go:131] duration metric: took 8.016980167s to wait for apiserver health ...
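
The healthz sequence above progresses from 403 (anonymous user, RBAC bootstrap roles not yet applied) through 500 (post-start hooks still failing) to 200 once the control plane settles. A sketch of such a poll against the endpoint in the log, treating any non-200 as "not ready yet" (the apiserver serves a cert signed by minikube's own CA, hence InsecureSkipVerify in this throwaway example; a real client should trust that CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.111:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready, status:", resp.StatusCode) // 403/500 while bootstrapping
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
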
	I0626 20:48:07.402628   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:48:07.402639   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:48:07.404494   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:48:06.054185   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:08.055165   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:05.520306   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.521293   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:10.021358   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.537058   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:09.537801   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:07.405919   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:48:07.416748   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
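
The `scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)` line above is minikube writing its bridge CNI config. The exact payload is not reproduced in the log; the following is a hypothetical minimal bridge conflist of that general shape (subnet taken from the pod CIDR above), written the same way:

package main

import "os"

// A hypothetical minimal bridge CNI config; the real 457-byte file
// minikube writes is not shown in this log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
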
	I0626 20:48:07.436249   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:48:07.445695   46683 system_pods.go:59] 7 kube-system pods found
	I0626 20:48:07.445732   46683 system_pods.go:61] "coredns-5644d7b6d9-5lcxw" [8e1a5fff-55d8-4d32-ae6f-c7694c8b5878] Running
	I0626 20:48:07.445741   46683 system_pods.go:61] "etcd-old-k8s-version-490377" [3fff7ab3-7ac7-4417-b3b8-9794f427c880] Running
	I0626 20:48:07.445750   46683 system_pods.go:61] "kube-apiserver-old-k8s-version-490377" [1b8e6b87-0b15-4586-8133-2dd33ac0b069] Running
	I0626 20:48:07.445771   46683 system_pods.go:61] "kube-controller-manager-old-k8s-version-490377" [2635a03c-884d-4245-a8ef-cb02e14443b8] Running
	I0626 20:48:07.445792   46683 system_pods.go:61] "kube-proxy-64btm" [0a8ee3c6-93a1-4989-94d0-209e8c655a64] Running
	I0626 20:48:07.445805   46683 system_pods.go:61] "kube-scheduler-old-k8s-version-490377" [2a6905a0-4f64-4cab-9b6d-55c708c07f8d] Running
	I0626 20:48:07.445815   46683 system_pods.go:61] "storage-provisioner" [9bf36874-b862-41f9-89d4-2d900adc2003] Running
	I0626 20:48:07.445826   46683 system_pods.go:74] duration metric: took 9.553318ms to wait for pod list to return data ...
	I0626 20:48:07.445836   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:48:07.450777   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:48:07.450816   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:48:07.450831   46683 node_conditions.go:105] duration metric: took 4.985221ms to run NodePressure ...
	I0626 20:48:07.450854   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0626 20:48:07.693070   46683 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0626 20:48:07.696336   46683 retry.go:31] will retry after 291.332727ms: kubelet not initialised
	I0626 20:48:07.992856   46683 retry.go:31] will retry after 210.561512ms: kubelet not initialised
	I0626 20:48:08.208369   46683 retry.go:31] will retry after 371.110023ms: kubelet not initialised
	I0626 20:48:08.585342   46683 retry.go:31] will retry after 1.199452561s: kubelet not initialised
	I0626 20:48:09.790625   46683 retry.go:31] will retry after 923.734482ms: kubelet not initialised
	I0626 20:48:10.719166   46683 retry.go:31] will retry after 1.019822632s: kubelet not initialised
	I0626 20:48:11.743554   46683 retry.go:31] will retry after 3.253867153s: kubelet not initialised
	I0626 20:48:10.552964   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.554534   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.520923   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.019384   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:12.036991   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:14.536734   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:15.002028   46683 retry.go:31] will retry after 2.234934883s: kubelet not initialised
	I0626 20:48:14.556223   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.053741   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.054276   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.021470   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.519794   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.036192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:19.036285   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:21.037136   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:17.242709   46683 retry.go:31] will retry after 6.079359776s: kubelet not initialised
	I0626 20:48:21.054851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.553653   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:22.020435   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:24.022102   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.037271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:25.037337   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:23.328332   46683 retry.go:31] will retry after 12.999865358s: kubelet not initialised
	I0626 20:48:25.553983   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.052253   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:26.518782   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:28.520217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:27.535792   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:29.536336   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:30.055419   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.553794   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:31.018773   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:33.020048   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:35.021492   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:32.036513   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:34.037364   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.535663   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:36.334795   46683 retry.go:31] will retry after 13.541680893s: kubelet not initialised
	I0626 20:48:35.052975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.053634   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.053672   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:37.519603   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:39.520279   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:38.536271   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:40.536344   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.553411   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.554235   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:41.520569   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:43.522354   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:42.536811   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.035291   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:45.554795   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.053080   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:46.019919   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:48.021534   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:47.036908   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.537386   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:49.882566   46683 kubeadm.go:787] kubelet initialised
	I0626 20:48:49.882597   46683 kubeadm.go:788] duration metric: took 42.189498896s waiting for restarted kubelet to initialise ...
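
The retry.go lines above show roughly exponential, jittered delays (291ms, 210ms, 371ms, 1.2s, up to 13.5s) until the restarted kubelet had initialised, about 42s in total. A generic sketch of that pattern, capped exponential backoff with jitter (not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls f until it succeeds or maxWait elapses,
// doubling a jittered delay between attempts up to a cap.
func retryWithBackoff(maxWait time.Duration, f func() error) error {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		if delay *= 2; delay > 15*time.Second {
			delay = 15 * time.Second
		}
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(time.Minute, func() error {
		if attempts++; attempts < 5 {
			return errors.New("kubelet not initialised")
		}
		return nil
	})
}
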
	I0626 20:48:49.882608   46683 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:48:49.888018   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894462   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.894488   46683 pod_ready.go:81] duration metric: took 6.438689ms waiting for pod "coredns-5644d7b6d9-5lcxw" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.894501   46683 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899336   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.899358   46683 pod_ready.go:81] duration metric: took 4.848554ms waiting for pod "coredns-5644d7b6d9-xl5rg" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.899370   46683 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903574   46683 pod_ready.go:92] pod "etcd-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.903593   46683 pod_ready.go:81] duration metric: took 4.21548ms waiting for pod "etcd-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.903605   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908052   46683 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:49.908071   46683 pod_ready.go:81] duration metric: took 4.457812ms waiting for pod "kube-apiserver-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:49.908091   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281099   46683 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.281124   46683 pod_ready.go:81] duration metric: took 373.02512ms waiting for pod "kube-controller-manager-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.281139   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681520   46683 pod_ready.go:92] pod "kube-proxy-64btm" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:50.681541   46683 pod_ready.go:81] duration metric: took 400.395983ms waiting for pod "kube-proxy-64btm" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:50.681552   46683 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081638   46683 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace has status "Ready":"True"
	I0626 20:48:51.081657   46683 pod_ready.go:81] duration metric: took 400.09969ms waiting for pod "kube-scheduler-old-k8s-version-490377" in "kube-system" namespace to be "Ready" ...
	I0626 20:48:51.081666   46683 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
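
Each pod_ready wait above boils down to fetching the pod and inspecting its Ready condition. A sketch of that check with client-go (the kubeconfig path is a stand-in; the pod name is the one from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// what the pod_ready.go waits above are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // stand-in path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-74d5856cc6-985dp", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}
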
	I0626 20:48:50.053581   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.053802   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:50.520090   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.019821   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.020035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:52.037008   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.037516   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:56.037585   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:53.491534   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:55.989758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:54.552843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.054370   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:57.020770   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.520039   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.535930   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.536377   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:58.488491   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:00.489659   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:48:59.552927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.056474   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:01.520560   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.019945   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.536728   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.537724   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:02.989651   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.989796   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:04.552707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.553918   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:08.554230   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.520608   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.020075   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:07.036576   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.537071   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:06.990147   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:09.489229   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.053576   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:13.054110   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.519744   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.020968   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:12.037949   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.537389   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:11.989856   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:14.488429   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.490529   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:15.553553   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.054036   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:16.519975   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.520288   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:17.036172   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:19.036248   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.036421   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:18.989943   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.990154   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:20.553570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.554626   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:21.020817   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.520602   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:23.036595   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.038742   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:22.990299   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:24.994358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.053465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.053635   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:25.520912   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:28.020413   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.537294   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:27.489707   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.990957   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:29.552847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:31.554360   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.052585   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:30.520207   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.521484   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:35.020064   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.035666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.036325   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.535889   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:32.489468   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:34.989668   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.556092   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.054617   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:37.519850   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:40.020217   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.036499   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.537332   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:36.992357   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:39.489925   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.553528   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.052935   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:42.520450   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.520634   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.035299   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.036688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:41.990255   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:44.489449   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.553009   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.553560   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:47.018978   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.020289   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:48.535753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.536227   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:46.990710   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:49.490459   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:50.553710   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.054824   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.520532   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:54.027509   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:52.537108   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.036452   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:51.989608   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:53.990105   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.990610   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:55.552894   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.553520   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:56.519796   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.021401   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.537189   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:59.537365   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:49:57.991065   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.489396   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:00.053139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.062882   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:01.519625   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:03.520031   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.037036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.536157   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:02.988698   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.991107   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:04.551742   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:06.553955   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.053612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:05.520676   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:08.019671   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:10.021418   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.035613   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.036666   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.536861   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:07.488874   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:09.490059   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.492236   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:11.553481   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.054574   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:12.518824   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.519670   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:14.036399   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.537496   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:13.990228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.488219   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.054609   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.553511   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:16.519795   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.520535   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:19.037355   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.037964   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:18.488819   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:20.489536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.053521   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.553922   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:21.021035   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.519784   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:23.535974   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.536845   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:22.988574   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:24.990088   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:26.052017   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.054905   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:25.520011   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:28.019323   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.019500   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.537999   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.036187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:27.488859   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:29.990482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:30.551701   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.554272   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.019810   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.023728   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.036817   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.042849   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.536415   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:32.488492   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:34.491986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:35.053986   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:37.055115   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.520551   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.019307   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:38.537119   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:40.537474   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:36.991471   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.489241   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.490458   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:39.552836   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.553914   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:44.052850   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:41.020033   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.520646   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.036648   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:45.036959   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:43.990768   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.489482   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.053271   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.553811   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:46.018851   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.021042   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.021254   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:47.536099   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.036995   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:48.489670   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.990231   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:50.554677   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.053841   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.520067   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.021727   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:52.042201   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:54.536260   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:53.489402   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.492509   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:55.055031   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.055181   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.521342   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.020905   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.036992   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.037534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:01.538152   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:57.993709   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:00.488776   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:50:59.555263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.054478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.519672   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:05.020878   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.036330   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.036424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:02.489742   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.988712   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:04.555161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.052680   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.055326   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:07.519641   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:09.520120   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.536306   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:10.537094   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:06.988973   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:08.989715   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.488986   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:11.554973   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.054638   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.019264   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:14.020253   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:12.537126   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.037318   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:13.490053   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:15.988498   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.055193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:18.553665   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:16.522548   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.020609   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.536765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.037132   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:17.990230   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:19.991216   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:20.555044   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.055590   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:21.520052   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:23.520574   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:22.038085   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.535549   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.022544   47309 pod_ready.go:81] duration metric: took 4m0.000394525s waiting for pod "metrics-server-74d5c6b9c-7szm5" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:25.022570   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:25.022598   47309 pod_ready.go:38] duration metric: took 4m12.221722724s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:25.022623   47309 kubeadm.go:640] restartCluster took 4m31.561880232s
	W0626 20:51:25.022684   47309 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out after 4m0s waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:25.022722   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
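(The pod_ready.go lines above are minikube polling each pod's PodReady condition until a 4m0s deadline expires; only after that does it give up and fall back to "kubeadm reset". As a point of reference, the condition being polled can be read with client-go roughly as in the sketch below. This is a minimal illustration, not minikube's actual implementation; the kubeconfig path and pod name are taken from the log only as examples.)

// readycheck.go — a minimal sketch (not minikube's code) of reading the
// PodReady condition that the pod_ready.go log lines above keep polling.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load ~/.kube/config; the path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log purely as an example.
	pod, err := client.CoreV1().Pods("kube-system").Get(
		context.Background(), "metrics-server-74d5c6b9c-7szm5", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}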
	I0626 20:51:22.489438   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:24.490731   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:25.554637   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:27.555070   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.020700   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.520337   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:26.990408   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:28.990900   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.490197   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:30.053627   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:31.041205   47605 pod_ready.go:81] duration metric: took 4m0.000945978s waiting for pod "metrics-server-74d5c6b9c-gb6b2" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:31.041235   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:31.041252   47605 pod_ready.go:38] duration metric: took 4m11.097608636s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:31.041297   47605 kubeadm.go:640] restartCluster took 4m31.299321581s
	W0626 20:51:31.041365   47605 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out after 4m0s waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:31.041409   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:31.019045   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.022453   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:33.492871   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.989984   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:35.520977   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:37.521128   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.021691   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:38.489349   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:40.989368   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.519812   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:44.520689   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:42.989461   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:45.491205   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:47.019936   47779 pod_ready.go:102] pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.506391   47779 pod_ready.go:81] duration metric: took 4m0.001048325s waiting for pod "metrics-server-74d5c6b9c-swcxn" in "kube-system" namespace to be "Ready" ...
	E0626 20:51:49.506423   47779 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:51:49.506441   47779 pod_ready.go:38] duration metric: took 4m7.651614118s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:51:49.506483   47779 kubeadm.go:640] restartCluster took 4m26.997522391s
	W0626 20:51:49.506561   47779 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out after 4m0s waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:51:49.506595   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:51:47.990134   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:49.990758   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:52.489144   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:54.990008   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:56.650050   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.627303734s)
	I0626 20:51:56.650132   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:51:56.665246   47309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:51:56.678749   47309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:51:56.690413   47309 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:51:56.690459   47309 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:51:56.757308   47309 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:51:56.757415   47309 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:51:56.915845   47309 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:51:56.916021   47309 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:51:56.916158   47309 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:51:57.137465   47309 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:51:57.139330   47309 out.go:204]   - Generating certificates and keys ...
	I0626 20:51:57.139431   47309 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:51:57.139514   47309 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:51:57.139648   47309 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:51:57.139718   47309 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:51:57.139852   47309 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:51:57.139914   47309 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:51:57.139997   47309 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:51:57.140101   47309 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:51:57.140224   47309 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:51:57.140830   47309 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:51:57.141343   47309 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:51:57.141471   47309 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:51:57.294061   47309 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:51:57.436714   47309 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:51:57.707612   47309 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:51:57.875383   47309 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:51:57.893698   47309 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:51:57.895257   47309 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:51:57.895427   47309 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:51:58.020261   47309 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:51:58.022209   47309 out.go:204]   - Booting up control plane ...
	I0626 20:51:58.022349   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:51:58.023359   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:51:58.024253   47309 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:51:58.026955   47309 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:51:58.032948   47309 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:51:57.489729   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:51:59.490578   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:01.491617   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:05.539291   47309 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505351 seconds
	I0626 20:52:05.539449   47309 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:05.564127   47309 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:06.097928   47309 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:06.098155   47309 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-934450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:06.617147   47309 kubeadm.go:322] [bootstrap-token] Using token: 7fs1fc.9teiyerfkduv7ctw
	I0626 20:52:03.989716   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.489773   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:06.618462   47309 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:06.618602   47309 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:06.631936   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:06.655354   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:06.662468   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:06.673817   47309 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:06.680979   47309 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:06.717394   47309 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:07.015067   47309 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:07.079315   47309 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:07.079362   47309 kubeadm.go:322] 
	I0626 20:52:07.079450   47309 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:07.079464   47309 kubeadm.go:322] 
	I0626 20:52:07.079544   47309 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:07.079556   47309 kubeadm.go:322] 
	I0626 20:52:07.079597   47309 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:07.079680   47309 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:07.079765   47309 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:07.079782   47309 kubeadm.go:322] 
	I0626 20:52:07.079867   47309 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:07.079880   47309 kubeadm.go:322] 
	I0626 20:52:07.079960   47309 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:07.079971   47309 kubeadm.go:322] 
	I0626 20:52:07.080038   47309 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:07.080123   47309 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:07.080233   47309 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:07.080249   47309 kubeadm.go:322] 
	I0626 20:52:07.080370   47309 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:07.080467   47309 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:07.080481   47309 kubeadm.go:322] 
	I0626 20:52:07.080574   47309 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.080692   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:07.080738   47309 kubeadm.go:322] 	--control-plane 
	I0626 20:52:07.080756   47309 kubeadm.go:322] 
	I0626 20:52:07.080858   47309 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:07.080870   47309 kubeadm.go:322] 
	I0626 20:52:07.080979   47309 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7fs1fc.9teiyerfkduv7ctw \
	I0626 20:52:07.081124   47309 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:07.082329   47309 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
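(For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal sketch of recomputing it on the control-plane node follows; it assumes the standard kubeadm CA location and is not part of the test harness.)

// cahash.go — recompute the sha256:<hash> kubeadm prints for
// --discovery-token-ca-cert-hash: SHA-256 over the CA certificate's
// DER-encoded Subject Public Key Info (SPKI).
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Default kubeadm CA certificate path (an assumption for this sketch).
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Marshal the public key back to DER-encoded SPKI, then hash it.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}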
	I0626 20:52:07.082353   47309 cni.go:84] Creating CNI manager for ""
	I0626 20:52:07.082369   47309 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:07.084307   47309 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:07.804074   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (36.762635025s)
	I0626 20:52:07.804158   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:07.819772   47605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:07.830166   47605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:07.839585   47605 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:07.839633   47605 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:08.061341   47605 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:52:07.085644   47309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:07.113105   47309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
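(The 457-byte conflist scp'd above is minikube's generated bridge CNI configuration; its exact contents are not captured in this log. The sketch below writes an illustrative conflist in the standard CNI bridge + host-local schema to show the general shape of such a file — every field value here is an assumption, not the file minikube produced.)

// writecni.go — a sketch of writing a bridge + host-local CNI conflist;
// values are illustrative only, not minikube's actual 1-k8s.conflist.
package main

import (
	"encoding/json"
	"os"
)

func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // illustrative pod CIDR
				},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Same destination path as in the log line above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0644); err != nil {
		panic(err)
	}
}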
	I0626 20:52:07.158420   47309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:07.158542   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.158590   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=no-preload-934450 minikube.k8s.io/updated_at=2023_06_26T20_52_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:07.637925   47309 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:07.638078   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.262589   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.762326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.262326   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:09.762334   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.262485   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:10.762376   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:11.262645   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:08.490810   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:10.990521   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:11.762599   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.262690   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.762512   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.262844   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:13.762234   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.262587   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:14.762670   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.262293   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:15.763106   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:16.263264   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:12.991151   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:15.489549   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:19.659464   47605 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:19.659534   47605 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:19.659620   47605 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:19.659793   47605 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:19.659913   47605 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:52:19.659993   47605 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:19.661681   47605 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:19.661770   47605 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:19.661860   47605 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:19.661969   47605 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:19.662065   47605 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:19.662158   47605 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:19.662226   47605 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:19.662321   47605 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:19.662401   47605 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:19.662487   47605 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:19.662595   47605 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:19.662649   47605 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:19.662717   47605 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:19.662779   47605 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:19.662849   47605 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:19.662928   47605 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:19.663014   47605 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:19.663128   47605 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:19.663231   47605 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:19.663286   47605 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:19.663370   47605 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:19.664951   47605 out.go:204]   - Booting up control plane ...
	I0626 20:52:19.665063   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:19.665157   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:19.665246   47605 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:19.665347   47605 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:19.665554   47605 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:19.665662   47605 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504998 seconds
	I0626 20:52:19.665792   47605 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:19.665948   47605 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:19.666027   47605 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:19.666278   47605 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-299839 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:19.666360   47605 kubeadm.go:322] [bootstrap-token] Using token: e53kqf.6hnw5p7blg3e1mpb
	I0626 20:52:19.667988   47605 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:19.668104   47605 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:19.668203   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:19.668357   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:19.668500   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:19.668632   47605 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:19.668732   47605 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:19.668890   47605 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:19.668953   47605 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:19.669024   47605 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:19.669042   47605 kubeadm.go:322] 
	I0626 20:52:19.669122   47605 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:19.669135   47605 kubeadm.go:322] 
	I0626 20:52:19.669243   47605 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:19.669253   47605 kubeadm.go:322] 
	I0626 20:52:19.669284   47605 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:19.669392   47605 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:19.669472   47605 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:19.669483   47605 kubeadm.go:322] 
	I0626 20:52:19.669561   47605 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:19.669571   47605 kubeadm.go:322] 
	I0626 20:52:19.669642   47605 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:19.669661   47605 kubeadm.go:322] 
	I0626 20:52:19.669724   47605 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:19.669831   47605 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:19.669941   47605 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:19.669951   47605 kubeadm.go:322] 
	I0626 20:52:19.670055   47605 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:19.670169   47605 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:19.670179   47605 kubeadm.go:322] 
	I0626 20:52:19.670283   47605 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670428   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:19.670469   47605 kubeadm.go:322] 	--control-plane 
	I0626 20:52:19.670484   47605 kubeadm.go:322] 
	I0626 20:52:19.670588   47605 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:19.670603   47605 kubeadm.go:322] 
	I0626 20:52:19.670715   47605 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e53kqf.6hnw5p7blg3e1mpb \
	I0626 20:52:19.670850   47605 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:19.670863   47605 cni.go:84] Creating CNI manager for ""
	I0626 20:52:19.670875   47605 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:19.672750   47605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:16.762961   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.263008   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:17.762325   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.262618   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:18.762659   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.262343   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.763023   47309 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.932557   47309 kubeadm.go:1081] duration metric: took 12.774065652s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:19.932647   47309 kubeadm.go:406] StartCluster complete in 5m26.514862376s
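The run of identical `kubectl get sa default` commands above (one roughly every 500ms) is the elevateKubeSystemPrivileges wait: the bootstrap is considered settled once the default service account exists. A minimal Go sketch of that retry pattern, assuming the 500ms interval the timestamps suggest (a hypothetical helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` until the service
	// account exists or the timeout elapses, mirroring the polling above.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil // default SA exists; privileges are in place
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for default service account")
	}

	func main() {
		err := waitForDefaultSA(
			"/var/lib/minikube/binaries/v1.27.3/kubectl",
			"/var/lib/minikube/kubeconfig",
			2*time.Minute)
		fmt.Println("wait result:", err)
	}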
	I0626 20:52:19.932687   47309 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:19.932796   47309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:19.935445   47309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:19.935820   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:19.936149   47309 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:19.936267   47309 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:19.936369   47309 addons.go:66] Setting storage-provisioner=true in profile "no-preload-934450"
	I0626 20:52:19.936388   47309 addons.go:228] Setting addon storage-provisioner=true in "no-preload-934450"
	W0626 20:52:19.936396   47309 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:19.936453   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.936890   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.936917   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.936996   47309 addons.go:66] Setting default-storageclass=true in profile "no-preload-934450"
	I0626 20:52:19.937022   47309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-934450"
	I0626 20:52:19.937178   47309 addons.go:66] Setting metrics-server=true in profile "no-preload-934450"
	I0626 20:52:19.937198   47309 addons.go:228] Setting addon metrics-server=true in "no-preload-934450"
	W0626 20:52:19.937206   47309 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:19.937259   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.937461   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937485   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.937664   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.937686   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.956754   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46867
	I0626 20:52:19.956777   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0626 20:52:19.956923   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I0626 20:52:19.957245   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957327   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957473   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.957897   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.957918   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958063   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958078   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958217   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.958240   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.958385   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959001   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.959029   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.959257   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959323   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.959523   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.960115   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.960168   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.980739   47309 addons.go:228] Setting addon default-storageclass=true in "no-preload-934450"
	W0626 20:52:19.980887   47309 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:19.980924   47309 host.go:66] Checking if "no-preload-934450" exists ...
	I0626 20:52:19.981308   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:19.981348   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:19.982528   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0626 20:52:19.982768   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I0626 20:52:19.983398   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984115   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:19.984291   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.984303   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.984767   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985276   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:19.985294   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:19.985346   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.985720   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:19.985919   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:19.987605   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.989810   47309 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:19.991208   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:19.991229   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
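metrics-apiservice.yaml registers metrics-server with the Kubernetes aggregation layer. The 424-byte payload itself is not shown in the log; the standard upstream registration it corresponds to looks like this (a sketch, not the exact bytes copied):

	apiVersion: apiregistration.k8s.io/v1
	kind: APIService
	metadata:
	  name: v1beta1.metrics.k8s.io
	spec:
	  group: metrics.k8s.io
	  version: v1beta1
	  service:
	    name: metrics-server
	    namespace: kube-system
	  insecureSkipTLSVerify: true
	  groupPriorityMinimum: 100
	  versionPriority: 100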
	I0626 20:52:19.991248   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:19.989487   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:19.997528   47309 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:19.996110   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:19.996135   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999411   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:19.999436   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:19.999495   47309 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:19.999511   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:19.999532   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.002886   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.003159   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.003321   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.004492   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.004806   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
	I0626 20:52:20.004991   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.005018   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.005189   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.005234   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.005402   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.005568   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.005703   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.005881   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.005899   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.006233   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.006590   47309 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:20.006614   47309 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:20.022796   47309 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I0626 20:52:20.023252   47309 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:20.023827   47309 main.go:141] libmachine: Using API Version  1
	I0626 20:52:20.023852   47309 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:20.024209   47309 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:20.024425   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetState
	I0626 20:52:20.026279   47309 main.go:141] libmachine: (no-preload-934450) Calling .DriverName
	I0626 20:52:20.026527   47309 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.026542   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:20.026559   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHHostname
	I0626 20:52:20.029302   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029775   47309 main.go:141] libmachine: (no-preload-934450) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:d3:cf", ip: ""} in network mk-no-preload-934450: {Iface:virbr2 ExpiryTime:2023-06-26 21:39:40 +0000 UTC Type:0 Mac:52:54:00:cf:d3:cf Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:no-preload-934450 Clientid:01:52:54:00:cf:d3:cf}
	I0626 20:52:20.029804   47309 main.go:141] libmachine: (no-preload-934450) DBG | domain no-preload-934450 has defined IP address 192.168.50.38 and MAC address 52:54:00:cf:d3:cf in network mk-no-preload-934450
	I0626 20:52:20.029944   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHPort
	I0626 20:52:20.030138   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHKeyPath
	I0626 20:52:20.030321   47309 main.go:141] libmachine: (no-preload-934450) Calling .GetSSHUsername
	I0626 20:52:20.030454   47309 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/no-preload-934450/id_rsa Username:docker}
	I0626 20:52:20.331846   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:20.341298   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:20.352664   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:20.352693   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:20.376961   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:20.420573   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:20.420599   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:20.495388   47309 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-934450" context rescaled to 1 replicas
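The rescale trims CoreDNS from the kubeadm default of two replicas down to one for a single-node cluster; a manual equivalent would be the following (shown for reference; minikube patches the deployment through the API rather than shelling out):

	kubectl --context no-preload-934450 -n kube-system scale deployment coredns --replicas=1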
	I0626 20:52:20.495436   47309 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:20.497711   47309 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:20.499512   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
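`systemctl is-active --quiet` prints nothing and reports state purely via its exit code (0 = active), which is what lets minikube use it as a silent health probe, e.g.:

	systemctl is-active --quiet kubelet && echo running || echo not-running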
	I0626 20:52:20.560528   47309 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:20.560559   47309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:20.647734   47309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:21.308936   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.802312904s)
	I0626 20:52:21.309013   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:21.323340   47779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:52:21.333741   47779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:52:21.346686   47779 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:52:21.346741   47779 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0626 20:52:21.427299   47779 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 20:52:21.427431   47779 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:52:21.598474   47779 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:52:21.598609   47779 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:52:21.598727   47779 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:52:21.802443   47779 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:52:17.989506   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:20.002885   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:21.804179   47779 out.go:204]   - Generating certificates and keys ...
	I0626 20:52:21.804277   47779 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:52:21.804985   47779 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:52:21.805576   47779 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:52:21.806465   47779 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:52:21.807206   47779 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:52:21.807988   47779 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:52:21.808775   47779 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:52:21.809427   47779 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:52:21.810136   47779 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:52:21.810809   47779 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:52:21.811489   47779 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:52:21.811563   47779 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:52:22.127084   47779 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:52:22.371731   47779 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:52:22.635165   47779 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:52:22.843347   47779 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:52:22.866673   47779 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:52:22.868080   47779 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:52:22.868259   47779 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 20:52:23.015798   47779 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:52:22.468922   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.137025983s)
	I0626 20:52:22.468974   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.468988   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469285   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469339   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469359   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469390   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469315   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:22.469630   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469649   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:22.469669   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:22.469678   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:22.469900   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:22.469915   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597030   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.255690675s)
	I0626 20:52:23.597078   47309 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.220078989s)
	I0626 20:52:23.597104   47309 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
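The sed pipeline completed above edits the CoreDNS Corefile in place: it inserts a hosts block before the existing `forward . /etc/resolv.conf` line and a `log` directive before `errors`. After the replace, the affected portion of the Corefile reads approximately as follows (unrelated plugins elided):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.50.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}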
	I0626 20:52:23.597084   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597131   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597130   47309 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.097584802s)
	I0626 20:52:23.597162   47309 node_ready.go:35] waiting up to 6m0s for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.597463   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597463   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597489   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.597499   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.597508   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.597879   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.597931   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.597950   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.632416   47309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.984627683s)
	I0626 20:52:23.632472   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632485   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.632907   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.632919   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.632940   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.632967   47309 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:23.632982   47309 main.go:141] libmachine: (no-preload-934450) Calling .Close
	I0626 20:52:23.633279   47309 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:23.633297   47309 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:23.633307   47309 addons.go:464] Verifying addon metrics-server=true in "no-preload-934450"
	I0626 20:52:23.633353   47309 main.go:141] libmachine: (no-preload-934450) DBG | Closing plugin on server side
	I0626 20:52:23.635198   47309 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:19.674407   47605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:19.702224   47605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:52:19.744577   47605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.744665   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=embed-certs-299839 minikube.k8s.io/updated_at=2023_06_26T20_52_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:19.783628   47605 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:20.149671   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:20.782659   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.283295   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:21.782574   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.283137   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:22.782766   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.282641   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.783459   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:23.017432   47779 out.go:204]   - Booting up control plane ...
	I0626 20:52:23.017573   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:52:23.019187   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:52:23.020097   47779 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:52:23.023559   47779 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:52:23.025808   47779 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:52:23.636740   47309 addons.go:499] enable addons completed in 3.700468963s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0626 20:52:23.637657   47309 node_ready.go:49] node "no-preload-934450" has status "Ready":"True"
	I0626 20:52:23.637673   47309 node_ready.go:38] duration metric: took 40.495678ms waiting for node "no-preload-934450" to be "Ready" ...
	I0626 20:52:23.637684   47309 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:23.676466   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
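Each of these waits maps onto a plain kubectl readiness check; an equivalent manual probe for the CoreDNS pod would be (illustrative, not what the harness executes):

	kubectl --context no-preload-934450 -n kube-system wait --for=condition=ready pod -l k8s-app=kube-dns --timeout=6m0s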
	I0626 20:52:25.699614   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:22.489080   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:24.490209   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:24.282506   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:24.782560   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.282565   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:25.783022   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.282856   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:26.783243   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.282657   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:27.783258   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.282802   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:28.783019   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.283285   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:29.782968   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.282489   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:30.782763   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.283126   47605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:31.445729   47605 kubeadm.go:1081] duration metric: took 11.701128618s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:31.445766   47605 kubeadm.go:406] StartCluster complete in 5m31.748710798s
	I0626 20:52:31.445787   47605 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.445873   47605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:31.448427   47605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:31.448700   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:31.448792   47605 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:31.448866   47605 addons.go:66] Setting storage-provisioner=true in profile "embed-certs-299839"
	I0626 20:52:31.448871   47605 addons.go:66] Setting default-storageclass=true in profile "embed-certs-299839"
	I0626 20:52:31.448884   47605 addons.go:228] Setting addon storage-provisioner=true in "embed-certs-299839"
	I0626 20:52:31.448885   47605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-299839"
	W0626 20:52:31.448892   47605 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:31.448938   47605 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:31.448948   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.448986   47605 addons.go:66] Setting metrics-server=true in profile "embed-certs-299839"
	I0626 20:52:31.449006   47605 addons.go:228] Setting addon metrics-server=true in "embed-certs-299839"
	W0626 20:52:31.449013   47605 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:31.449053   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449306   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.449762   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450455   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.450635   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.450708   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.468787   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0626 20:52:31.469015   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0626 20:52:31.469401   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469497   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.469929   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.469947   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470036   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.470073   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.470548   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470605   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.470723   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39029
	I0626 20:52:31.470915   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.471202   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.471236   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.471374   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.471846   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.471871   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.481862   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.482471   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.482499   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.492391   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0626 20:52:31.493190   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.493807   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.493833   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.494190   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.494347   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.496376   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.499801   47605 addons.go:228] Setting addon default-storageclass=true in "embed-certs-299839"
	W0626 20:52:31.499822   47605 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:31.499851   47605 host.go:66] Checking if "embed-certs-299839" exists ...
	I0626 20:52:31.500224   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.500253   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.506027   47605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:31.507267   47605 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.507286   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:31.507306   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.507954   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0626 20:52:31.508919   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.509350   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.509364   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.509784   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.510070   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.511452   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.513168   47605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:28.196489   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:30.196782   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:26.989644   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:29.488966   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.506536   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:31.511805   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.512430   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.514510   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.514522   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:31.514530   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.514536   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:31.514555   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.514709   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.514860   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.515029   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.517249   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517628   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.517653   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.517774   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.517948   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.518282   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.518454   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.522114   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0626 20:52:31.522433   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.522982   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.523010   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.523416   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.523984   47605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:31.524019   47605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:31.545037   47605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0626 20:52:31.545523   47605 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:31.546109   47605 main.go:141] libmachine: Using API Version  1
	I0626 20:52:31.546140   47605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:31.546551   47605 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:31.546826   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetState
	I0626 20:52:31.549289   47605 main.go:141] libmachine: (embed-certs-299839) Calling .DriverName
	I0626 20:52:31.549597   47605 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.549618   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:31.549638   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHHostname
	I0626 20:52:31.553457   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553713   47605 main.go:141] libmachine: (embed-certs-299839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:e6:45", ip: ""} in network mk-embed-certs-299839: {Iface:virbr1 ExpiryTime:2023-06-26 21:46:45 +0000 UTC Type:0 Mac:52:54:00:d6:e6:45 Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:embed-certs-299839 Clientid:01:52:54:00:d6:e6:45}
	I0626 20:52:31.553744   47605 main.go:141] libmachine: (embed-certs-299839) DBG | domain embed-certs-299839 has defined IP address 192.168.39.51 and MAC address 52:54:00:d6:e6:45 in network mk-embed-certs-299839
	I0626 20:52:31.553798   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHPort
	I0626 20:52:31.553995   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHKeyPath
	I0626 20:52:31.554131   47605 main.go:141] libmachine: (embed-certs-299839) Calling .GetSSHUsername
	I0626 20:52:31.554284   47605 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/embed-certs-299839/id_rsa Username:docker}
	I0626 20:52:31.693230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:31.713818   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:31.718654   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:31.718682   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:31.734681   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:31.767394   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:31.767424   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:31.884424   47605 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:31.884443   47605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:31.961893   47605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
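
The four manifests applied here are the whole metrics-server addon: the Deployment, its RBAC, its Service, and the APIService object that hooks it into the apiserver's aggregation layer. The 424-byte metrics-apiservice.yaml itself is not reproduced in the log; the upstream metrics-server registration it corresponds to looks like this (illustrative, taken from the stock metrics-server manifests rather than decoded from the log):

    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      groupPriorityMinimum: 100
      insecureSkipTLSVerify: true
      service:
        name: metrics-server
        namespace: kube-system
      version: v1beta1
      versionPriority: 100

Once that APIService reports Available, requests for metrics.k8s.io (kubectl top, the HPA) are proxied through the apiserver to the metrics-server pod; the "Verifying addon metrics-server=true" lines further down are minikube's post-apply check of this addon.
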
	I0626 20:52:32.055887   47605 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-299839" context rescaled to 1 replicas
	I0626 20:52:32.055933   47605 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.51 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:32.058697   47605 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:32.530480   47779 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.504525 seconds
	I0626 20:52:32.530633   47779 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:52:32.556112   47779 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:52:33.096104   47779 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:52:33.096372   47779 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-473235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 20:52:33.615425   47779 kubeadm.go:322] [bootstrap-token] Using token: fvy9dh.hbeabw0ufqdnf2rd
	I0626 20:52:33.617480   47779 out.go:204]   - Configuring RBAC rules ...
	I0626 20:52:33.617622   47779 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:52:33.630158   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 20:52:33.641973   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:52:33.649480   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:52:33.657736   47779 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:52:33.663093   47779 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:52:33.698108   47779 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 20:52:34.017843   47779 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:52:34.069498   47779 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:52:34.070500   47779 kubeadm.go:322] 
	I0626 20:52:34.070587   47779 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:52:34.070600   47779 kubeadm.go:322] 
	I0626 20:52:34.070691   47779 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:52:34.070705   47779 kubeadm.go:322] 
	I0626 20:52:34.070734   47779 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:52:34.070809   47779 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:52:34.070915   47779 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:52:34.070952   47779 kubeadm.go:322] 
	I0626 20:52:34.071047   47779 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 20:52:34.071060   47779 kubeadm.go:322] 
	I0626 20:52:34.071114   47779 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 20:52:34.071124   47779 kubeadm.go:322] 
	I0626 20:52:34.071183   47779 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:52:34.071276   47779 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:52:34.071360   47779 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:52:34.071369   47779 kubeadm.go:322] 
	I0626 20:52:34.071454   47779 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 20:52:34.071550   47779 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:52:34.071558   47779 kubeadm.go:322] 
	I0626 20:52:34.071677   47779 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.071823   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:52:34.071852   47779 kubeadm.go:322] 	--control-plane 
	I0626 20:52:34.071860   47779 kubeadm.go:322] 
	I0626 20:52:34.071961   47779 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:52:34.071973   47779 kubeadm.go:322] 
	I0626 20:52:34.072075   47779 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token fvy9dh.hbeabw0ufqdnf2rd \
	I0626 20:52:34.072202   47779 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
	I0626 20:52:34.072734   47779 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
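
The --discovery-token-ca-cert-hash repeated in both join commands pins the cluster CA for joining nodes: it is the SHA-256 digest of the DER-encoded public key (SPKI) of /etc/kubernetes/pki/ca.crt. The kubeadm documentation gives an equivalent way to recompute it on the control plane, handy when checking a stale join command against a rebuilt cluster:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
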
	I0626 20:52:34.072775   47779 cni.go:84] Creating CNI manager for ""
	I0626 20:52:34.072794   47779 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:52:34.074659   47779 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:52:32.060653   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:33.969636   47605 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.276366101s)
	I0626 20:52:33.969679   47605 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
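
The injection reported here is just the sed pipeline that completed above: fetch the coredns ConfigMap, splice a hosts block in front of the "forward . /etc/resolv.conf" line (plus a log directive before errors), and push the result back with kubectl replace. The Corefile fragment those sed expressions produce is, verbatim:

            hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }

Because of fallthrough, only host.minikube.internal is answered from this block (it resolves to the libvirt gateway, i.e. the host machine); every other name still reaches the forward plugin, so cluster DNS is otherwise unchanged.
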
	I0626 20:52:34.114443   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.400580422s)
	I0626 20:52:34.114587   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114636   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114483   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.379765696s)
	I0626 20:52:34.114695   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.114714   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.114993   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115036   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115049   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.115059   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.115068   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.115386   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.115394   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.115458   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117682   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.117720   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.117736   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.117754   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.117764   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.119184   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.119204   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.119218   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.119238   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.119253   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.120750   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.120787   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.120800   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.800635   47605 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.739945617s)
	I0626 20:52:34.800672   47605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.838732117s)
	I0626 20:52:34.800721   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.800740   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.800674   47605 node_ready.go:35] waiting up to 6m0s for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.801059   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.801086   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.801103   47605 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:34.801112   47605 main.go:141] libmachine: (embed-certs-299839) Calling .Close
	I0626 20:52:34.802733   47605 main.go:141] libmachine: (embed-certs-299839) DBG | Closing plugin on server side
	I0626 20:52:34.802767   47605 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:34.802781   47605 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:34.802798   47605 addons.go:464] Verifying addon metrics-server=true in "embed-certs-299839"
	I0626 20:52:34.804616   47605 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:52:34.076233   47779 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:52:34.097578   47779 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
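
That 457-byte conflist is the concrete form of the "recommending bridge" decision above. Its exact payload is not shown in the log; a bridge conflist of the general shape CRI-O would load from /etc/cni/net.d looks like the following (field values are illustrative, following the stock CNI bridge plugin configuration, not decoded from the log):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }

CRI-O picks the lexically first *.conflist under /etc/cni/net.d as the pod network by default, which is why the file is named 1-k8s.conflist.
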
	I0626 20:52:34.126294   47779 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:52:34.126351   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.126361   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=default-k8s-diff-port-473235 minikube.k8s.io/updated_at=2023_06_26T20_52_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:34.672738   47779 ops.go:34] apiserver oom_adj: -16
	I0626 20:52:34.672886   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:32.196979   47309 pod_ready.go:102] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.198202   47309 pod_ready.go:97] pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198243   47309 pod_ready.go:81] duration metric: took 10.521748073s waiting for pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:34.198256   47309 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-k8r6j" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 20:52:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.38 PodIP: PodIPs:[] StartTime:2023-06-26 20:52:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-26 20:52:23 +0000 UTC,FinishedAt:2023-06-26 20:52:33 +0000 UTC,ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3 ContainerID:cri-o://b24a94b15e524bf6bc2546ee0b1b02381f9d5add258b0dcaa1c1816513ec6f71 Started:0xc0006f2400 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 20:52:34.198265   47309 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208718   47309 pod_ready.go:92] pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.208751   47309 pod_ready.go:81] duration metric: took 10.474456ms waiting for pod "coredns-5d78c9869d-xm96k" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.208765   47309 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216757   47309 pod_ready.go:92] pod "etcd-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.216787   47309 pod_ready.go:81] duration metric: took 8.014039ms waiting for pod "etcd-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.216800   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226840   47309 pod_ready.go:92] pod "kube-apiserver-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.226862   47309 pod_ready.go:81] duration metric: took 10.054474ms waiting for pod "kube-apiserver-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.226875   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234229   47309 pod_ready.go:92] pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.234252   47309 pod_ready.go:81] duration metric: took 7.369366ms waiting for pod "kube-controller-manager-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.234265   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603958   47309 pod_ready.go:92] pod "kube-proxy-jhz99" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.603985   47309 pod_ready.go:81] duration metric: took 369.712585ms waiting for pod "kube-proxy-jhz99" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.603999   47309 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.992990   47309 pod_ready.go:92] pod "kube-scheduler-no-preload-934450" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:34.993018   47309 pod_ready.go:81] duration metric: took 389.011206ms waiting for pod "kube-scheduler-no-preload-934450" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:34.993033   47309 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
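
Every pod_ready wait in this log follows the same pattern: poll the pod through the apiserver until its Ready condition turns True, bail out early if the pod lands in a terminal phase (as the Succeeded coredns pod did above), or give up at the 6m0s deadline. A minimal client-go sketch of that loop, reusing the kubeconfig path that appears throughout this log (an illustration of the pattern, not minikube's actual pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2s; give up after the 6m timeout used in the log.
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "metrics-server-74d5c6b9c-4dkpm", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient apiserver errors: keep polling
            }
            // A Succeeded/Failed pod will never become Ready; stop waiting.
            if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
                return false, fmt.Errorf("pod reached terminal phase %s", pod.Status.Phase)
            }
            return podReady(pod), nil
        })
        fmt.Println("wait result:", err)
    }
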
	I0626 20:52:33.991358   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:36.489561   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:34.806005   47605 addons.go:499] enable addons completed in 3.357208024s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:52:34.826098   47605 node_ready.go:49] node "embed-certs-299839" has status "Ready":"True"
	I0626 20:52:34.826123   47605 node_ready.go:38] duration metric: took 25.328707ms waiting for node "embed-certs-299839" to be "Ready" ...
	I0626 20:52:34.826131   47605 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:34.853293   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388894   47605 pod_ready.go:92] pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.388921   47605 pod_ready.go:81] duration metric: took 1.535604079s waiting for pod "coredns-5d78c9869d-bv29x" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.388931   47605 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397936   47605 pod_ready.go:92] pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.397962   47605 pod_ready.go:81] duration metric: took 9.024703ms waiting for pod "coredns-5d78c9869d-tl42z" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.397978   47605 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409066   47605 pod_ready.go:92] pod "etcd-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.409098   47605 pod_ready.go:81] duration metric: took 11.112746ms waiting for pod "etcd-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.409111   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419292   47605 pod_ready.go:92] pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.419313   47605 pod_ready.go:81] duration metric: took 10.193966ms waiting for pod "kube-apiserver-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.419322   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429116   47605 pod_ready.go:92] pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:36.429140   47605 pod_ready.go:81] duration metric: took 9.812044ms waiting for pod "kube-controller-manager-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:36.429154   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316268   47605 pod_ready.go:92] pod "kube-proxy-scfwr" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.316318   47605 pod_ready.go:81] duration metric: took 887.155494ms waiting for pod "kube-proxy-scfwr" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.316334   47605 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605351   47605 pod_ready.go:92] pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:37.605394   47605 pod_ready.go:81] duration metric: took 289.052198ms waiting for pod "kube-scheduler-embed-certs-299839" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:37.605409   47605 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:35.287764   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:35.787902   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.287089   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:36.786922   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.287932   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.787255   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.287820   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:38.786891   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.287467   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:39.787282   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:37.400022   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:39.401566   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:41.404969   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:38.491696   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.990293   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.013927   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:42.518436   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:40.287734   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:40.786949   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.287187   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:41.787722   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.287098   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:42.787623   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.287242   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:43.787224   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.287339   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:44.787760   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.287273   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:45.787052   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.287810   47779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:52:46.436665   47779 kubeadm.go:1081] duration metric: took 12.310369141s to wait for elevateKubeSystemPrivileges.
	I0626 20:52:46.436696   47779 kubeadm.go:406] StartCluster complete in 5m23.972219662s
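
The burst of identical "kubectl get sa default" runs between 20:52:34 and 20:52:46 is the elevateKubeSystemPrivileges wait being timed here: kube-controller-manager creates each namespace's default ServiceAccount asynchronously after kubeadm init, and workloads that reference it cannot be admitted until it exists, so minikube simply retries

    sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig

at roughly 500ms intervals (visible in the timestamps) until it succeeds, then records the 12.310369141s it took and declares StartCluster complete.
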
	I0626 20:52:46.436715   47779 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.436798   47779 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:52:46.438623   47779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:52:46.438897   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:52:46.439016   47779 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:52:46.439110   47779 addons.go:66] Setting storage-provisioner=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439117   47779 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:52:46.439128   47779 addons.go:66] Setting default-storageclass=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439166   47779 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-473235"
	I0626 20:52:46.439128   47779 addons.go:228] Setting addon storage-provisioner=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439240   47779 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:52:46.439285   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439133   47779 addons.go:66] Setting metrics-server=true in profile "default-k8s-diff-port-473235"
	I0626 20:52:46.439336   47779 addons.go:228] Setting addon metrics-server=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.439346   47779 addons.go:237] addon metrics-server should already be in state true
	I0626 20:52:46.439383   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.439663   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439691   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439694   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439717   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.439733   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.439754   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.456038   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0626 20:52:46.456227   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0626 20:52:46.456533   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.456769   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.457072   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457092   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457258   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.457280   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.457413   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457749   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.457902   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.459751   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0626 20:52:46.460296   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.460326   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.460688   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.462951   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.462975   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.463384   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.463981   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.464006   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.477368   47779 addons.go:228] Setting addon default-storageclass=true in "default-k8s-diff-port-473235"
	W0626 20:52:46.477472   47779 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:52:46.477516   47779 host.go:66] Checking if "default-k8s-diff-port-473235" exists ...
	I0626 20:52:46.477987   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.478063   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.479865   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0626 20:52:46.480358   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.480932   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.480951   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.481335   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.482608   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0626 20:52:46.482630   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.482982   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.483505   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.483521   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.483907   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.484123   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.485234   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.487634   47779 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:52:46.486430   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.488916   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:52:46.488938   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:52:46.488959   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.490698   47779 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:52:43.900514   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.900540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:43.488701   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:45.992735   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:46.491860   47779 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.491875   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:52:46.491893   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.492950   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.493834   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.493855   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.494361   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.494827   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.494987   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.495130   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.496109   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.496170   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496192   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.496213   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.496294   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.496444   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.496549   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.502119   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
	I0626 20:52:46.502456   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.502898   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.502916   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.503203   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.503723   47779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:52:46.503747   47779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:52:46.522597   47779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0626 20:52:46.523240   47779 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:52:46.523892   47779 main.go:141] libmachine: Using API Version  1
	I0626 20:52:46.523912   47779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:52:46.524423   47779 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:52:46.524674   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetState
	I0626 20:52:46.526567   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .DriverName
	I0626 20:52:46.528682   47779 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.528699   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:52:46.528721   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHHostname
	I0626 20:52:46.531983   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532450   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:62:a8", ip: ""} in network mk-default-k8s-diff-port-473235: {Iface:virbr4 ExpiryTime:2023-06-26 21:47:05 +0000 UTC Type:0 Mac:52:54:00:89:62:a8 Iaid: IPaddr:192.168.61.238 Prefix:24 Hostname:default-k8s-diff-port-473235 Clientid:01:52:54:00:89:62:a8}
	I0626 20:52:46.532477   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | domain default-k8s-diff-port-473235 has defined IP address 192.168.61.238 and MAC address 52:54:00:89:62:a8 in network mk-default-k8s-diff-port-473235
	I0626 20:52:46.532785   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHPort
	I0626 20:52:46.533992   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHKeyPath
	I0626 20:52:46.534158   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .GetSSHUsername
	I0626 20:52:46.534302   47779 sshutil.go:53] new ssh client: &{IP:192.168.61.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/default-k8s-diff-port-473235/id_rsa Username:docker}
	I0626 20:52:46.698636   47779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 20:52:46.819666   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:52:46.915074   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:52:46.918133   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:52:46.918161   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:52:47.006856   47779 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-473235" context rescaled to 1 replicas
	I0626 20:52:47.006907   47779 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.238 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:52:47.008746   47779 out.go:177] * Verifying Kubernetes components...
	I0626 20:52:45.013051   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.014722   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:47.010273   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:47.015003   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:52:47.015022   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:52:47.099554   47779 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:47.099583   47779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:52:47.162192   47779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:52:48.848078   47779 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.149396252s)
	I0626 20:52:48.848110   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.028412306s)
	I0626 20:52:48.848145   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848157   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848112   47779 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0626 20:52:48.848418   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848438   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848440   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848448   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848460   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848678   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848699   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:48.848712   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:48.848715   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:48.848722   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:48.848936   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:48.848959   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.142482   47779 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.13217662s)
	I0626 20:52:49.142522   47779 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.142664   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.227563186s)
	I0626 20:52:49.142706   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.142723   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143018   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143037   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143047   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.143055   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.143135   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.143309   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.143402   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.143378   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) DBG | Closing plugin on server side
	I0626 20:52:49.230635   47779 node_ready.go:49] node "default-k8s-diff-port-473235" has status "Ready":"True"
	I0626 20:52:49.230663   47779 node_ready.go:38] duration metric: took 88.12938ms waiting for node "default-k8s-diff-port-473235" to be "Ready" ...
	I0626 20:52:49.230688   47779 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:49.248094   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:49.857182   47779 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.694948259s)
	I0626 20:52:49.857243   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857254   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857552   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857569   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857579   47779 main.go:141] libmachine: Making call to close driver server
	I0626 20:52:49.857588   47779 main.go:141] libmachine: (default-k8s-diff-port-473235) Calling .Close
	I0626 20:52:49.857816   47779 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:52:49.857836   47779 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:52:49.857847   47779 addons.go:464] Verifying addon metrics-server=true in "default-k8s-diff-port-473235"
	I0626 20:52:49.859648   47779 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0626 20:52:49.860902   47779 addons.go:499] enable addons completed in 3.421885216s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0626 20:52:47.901422   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.402347   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:48.490248   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:50.991228   46683 pod_ready.go:102] pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.082154   46683 pod_ready.go:81] duration metric: took 4m0.000473504s waiting for pod "metrics-server-74d5856cc6-985dp" in "kube-system" namespace to be "Ready" ...
	E0626 20:52:51.082180   46683 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:52:51.082198   46683 pod_ready.go:38] duration metric: took 4m1.199581008s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:51.082227   46683 kubeadm.go:640] restartCluster took 5m4.421255564s
	W0626 20:52:51.082286   46683 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0626 20:52:51.082319   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0626 20:52:50.897742   47779 pod_ready.go:92] pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.897765   47779 pod_ready.go:81] duration metric: took 1.649649958s waiting for pod "coredns-5d78c9869d-bfqmv" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.897777   47779 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.924988   47779 pod_ready.go:92] pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.925007   47779 pod_ready.go:81] duration metric: took 27.222965ms waiting for pod "coredns-5d78c9869d-q7zms" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.925016   47779 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942760   47779 pod_ready.go:92] pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.942781   47779 pod_ready.go:81] duration metric: took 17.75819ms waiting for pod "etcd-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.942790   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956204   47779 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.956224   47779 pod_ready.go:81] duration metric: took 13.428405ms waiting for pod "kube-apiserver-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.956235   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964542   47779 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:50.964569   47779 pod_ready.go:81] duration metric: took 8.32705ms waiting for pod "kube-controller-manager-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:50.964581   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791355   47779 pod_ready.go:92] pod "kube-proxy-k4hzc" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:51.791376   47779 pod_ready.go:81] duration metric: took 826.787812ms waiting for pod "kube-proxy-k4hzc" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:51.791384   47779 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078670   47779 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace has status "Ready":"True"
	I0626 20:52:52.078700   47779 pod_ready.go:81] duration metric: took 287.306474ms waiting for pod "kube-scheduler-default-k8s-diff-port-473235" in "kube-system" namespace to be "Ready" ...
	I0626 20:52:52.078714   47779 pod_ready.go:38] duration metric: took 2.848014299s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:52:52.078733   47779 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:52:52.078789   47779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:52:52.094414   47779 api_server.go:72] duration metric: took 5.08747775s to wait for apiserver process to appear ...
	I0626 20:52:52.094444   47779 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:52:52.094468   47779 api_server.go:253] Checking apiserver healthz at https://192.168.61.238:8444/healthz ...
	I0626 20:52:52.101300   47779 api_server.go:279] https://192.168.61.238:8444/healthz returned 200:
	ok
	I0626 20:52:52.102682   47779 api_server.go:141] control plane version: v1.27.3
	I0626 20:52:52.102703   47779 api_server.go:131] duration metric: took 8.250707ms to wait for apiserver health ...
	I0626 20:52:52.102712   47779 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:52:52.283428   47779 system_pods.go:59] 9 kube-system pods found
	I0626 20:52:52.283459   47779 system_pods.go:61] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.283467   47779 system_pods.go:61] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.283474   47779 system_pods.go:61] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.283482   47779 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.283488   47779 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.283493   47779 system_pods.go:61] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.283500   47779 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.283511   47779 system_pods.go:61] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.283519   47779 system_pods.go:61] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.283527   47779 system_pods.go:74] duration metric: took 180.810034ms to wait for pod list to return data ...
	I0626 20:52:52.283540   47779 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:52:52.478374   47779 default_sa.go:45] found service account: "default"
	I0626 20:52:52.478400   47779 default_sa.go:55] duration metric: took 194.853163ms for default service account to be created ...
	I0626 20:52:52.478418   47779 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:52:52.683697   47779 system_pods.go:86] 9 kube-system pods found
	I0626 20:52:52.683724   47779 system_pods.go:89] "coredns-5d78c9869d-bfqmv" [799f00be-7a8e-47ea-841f-93ba8ff58f56] Running
	I0626 20:52:52.683730   47779 system_pods.go:89] "coredns-5d78c9869d-q7zms" [86e16893-4f35-4d11-8346-81fee8cb607a] Running
	I0626 20:52:52.683735   47779 system_pods.go:89] "etcd-default-k8s-diff-port-473235" [c137e87d-3f4e-4147-b4b6-05778466b672] Running
	I0626 20:52:52.683740   47779 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-473235" [ed4a59a1-2f0f-43aa-b51b-89bf590486b4] Running
	I0626 20:52:52.683745   47779 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-473235" [ea1201b5-2cdb-4721-b853-0c6ef93970a3] Running
	I0626 20:52:52.683748   47779 system_pods.go:89] "kube-proxy-k4hzc" [036703e4-59a2-4be1-84ad-621e52766052] Running
	I0626 20:52:52.683752   47779 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-473235" [a639afa7-3284-47cc-b131-991f7eb5daf0] Running
	I0626 20:52:52.683761   47779 system_pods.go:89] "metrics-server-74d5c6b9c-8qcw9" [b81a167a-fb12-4a9c-89ae-93ff6474dc30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:52:52.683773   47779 system_pods.go:89] "storage-provisioner" [0ff5c6fb-2917-4a8a-a33a-20631ff9fc1f] Running
	I0626 20:52:52.683789   47779 system_pods.go:126] duration metric: took 205.364587ms to wait for k8s-apps to be running ...
	I0626 20:52:52.683798   47779 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:52:52.683846   47779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:52:52.698439   47779 system_svc.go:56] duration metric: took 14.634482ms WaitForService to wait for kubelet.
	I0626 20:52:52.698463   47779 kubeadm.go:581] duration metric: took 5.691531199s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:52:52.698480   47779 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:52:52.879414   47779 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:52:52.879441   47779 node_conditions.go:123] node cpu capacity is 2
	I0626 20:52:52.879454   47779 node_conditions.go:105] duration metric: took 180.969761ms to run NodePressure ...
	I0626 20:52:52.879466   47779 start.go:228] waiting for startup goroutines ...
	I0626 20:52:52.879473   47779 start.go:233] waiting for cluster config update ...
	I0626 20:52:52.879484   47779 start.go:242] writing updated cluster config ...
	I0626 20:52:52.879748   47779 ssh_runner.go:195] Run: rm -f paused
	I0626 20:52:52.928982   47779 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:52:52.930701   47779 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-473235" cluster and "default" namespace by default
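At this point the "default-k8s-diff-port-473235" profile is fully started and kubectl is configured against it. A minimal sanity check of the cluster state, assuming the context name printed above, would be:

	kubectl --context default-k8s-diff-port-473235 get nodes
	kubectl --context default-k8s-diff-port-473235 -n kube-system get pods

which should list the single control-plane node and the kube-system pods enumerated in the system_pods output earlier in this log.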
	I0626 20:52:49.513843   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:51.515851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:54.013443   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:52.901965   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:55.400541   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:56.014186   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:58.516445   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:57.900857   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:52:59.901944   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:01.013089   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:03.015510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:02.400534   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:04.400691   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:06.401897   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:05.513529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.013510   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:08.901751   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:11.400891   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:10.513562   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:12.515529   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:13.900503   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:15.900570   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:14.208647   46683 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (23.126299276s)
	I0626 20:53:14.208727   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:14.222919   46683 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 20:53:14.234762   46683 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 20:53:14.244800   46683 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 20:53:14.244840   46683 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0626 20:53:14.465786   46683 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 20:53:15.014781   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.017400   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:17.901367   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:20.401697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:19.515459   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.015763   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:22.900407   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:24.901270   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.255771   46683 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0626 20:53:27.255867   46683 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 20:53:27.255968   46683 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 20:53:27.256115   46683 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 20:53:27.256237   46683 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0626 20:53:27.256368   46683 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 20:53:27.256489   46683 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 20:53:27.256550   46683 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0626 20:53:27.256604   46683 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 20:53:27.258050   46683 out.go:204]   - Generating certificates and keys ...
	I0626 20:53:27.258140   46683 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 20:53:27.258235   46683 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 20:53:27.258357   46683 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0626 20:53:27.258441   46683 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0626 20:53:27.258554   46683 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0626 20:53:27.258611   46683 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0626 20:53:27.258665   46683 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0626 20:53:27.258737   46683 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0626 20:53:27.258832   46683 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0626 20:53:27.258907   46683 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0626 20:53:27.258954   46683 kubeadm.go:322] [certs] Using the existing "sa" key
	I0626 20:53:27.259034   46683 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 20:53:27.259106   46683 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 20:53:27.259170   46683 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 20:53:27.259247   46683 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 20:53:27.259325   46683 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 20:53:27.259410   46683 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 20:53:27.260969   46683 out.go:204]   - Booting up control plane ...
	I0626 20:53:27.261074   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 20:53:27.261181   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 20:53:27.261257   46683 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 20:53:27.261341   46683 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 20:53:27.261496   46683 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 20:53:27.261599   46683 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.003012 seconds
	I0626 20:53:27.261709   46683 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 20:53:27.261854   46683 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 20:53:27.261940   46683 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 20:53:27.262112   46683 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-490377 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0626 20:53:27.262210   46683 kubeadm.go:322] [bootstrap-token] Using token: 9pdj92.0ssfpvr0ns0ww3t3
	I0626 20:53:27.263670   46683 out.go:204]   - Configuring RBAC rules ...
	I0626 20:53:27.263769   46683 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 20:53:27.263903   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 20:53:27.264029   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 20:53:27.264172   46683 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 20:53:27.264278   46683 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 20:53:27.264333   46683 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 20:53:27.264372   46683 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 20:53:27.264379   46683 kubeadm.go:322] 
	I0626 20:53:27.264445   46683 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 20:53:27.264454   46683 kubeadm.go:322] 
	I0626 20:53:27.264557   46683 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 20:53:27.264568   46683 kubeadm.go:322] 
	I0626 20:53:27.264598   46683 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 20:53:27.264668   46683 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 20:53:27.264715   46683 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 20:53:27.264721   46683 kubeadm.go:322] 
	I0626 20:53:27.264769   46683 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 20:53:27.264846   46683 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 20:53:27.264934   46683 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 20:53:27.264943   46683 kubeadm.go:322] 
	I0626 20:53:27.265038   46683 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0626 20:53:27.265101   46683 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 20:53:27.265107   46683 kubeadm.go:322] 
	I0626 20:53:27.265171   46683 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265269   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 \
	I0626 20:53:27.265292   46683 kubeadm.go:322]     --control-plane 	  
	I0626 20:53:27.265298   46683 kubeadm.go:322] 
	I0626 20:53:27.265439   46683 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 20:53:27.265451   46683 kubeadm.go:322] 
	I0626 20:53:27.265581   46683 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9pdj92.0ssfpvr0ns0ww3t3 \
	I0626 20:53:27.265739   46683 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7411e783c808252a5f0f147c008fd91b3a33275bb9fd0c528e94d54ccd558848 
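The discovery hash printed in the join commands above is the sha256 digest of the cluster CA's public key. It can be recomputed on the node with the standard openssl pipeline from the kubeadm documentation, assuming the certificateDir "/var/lib/minikube/certs" shown in the preflight output:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex

The hex digest at the end of the output should match the value after "sha256:" in the join command.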
	I0626 20:53:27.265753   46683 cni.go:84] Creating CNI manager for ""
	I0626 20:53:27.265765   46683 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 20:53:27.267293   46683 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0626 20:53:24.515093   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.014403   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.401630   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:29.404203   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:27.268439   46683 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0626 20:53:27.281135   46683 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0626 20:53:27.304145   46683 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 20:53:27.304275   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=old-k8s-version-490377 minikube.k8s.io/updated_at=2023_06_26T20_53_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.304277   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.555789   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:27.571040   46683 ops.go:34] apiserver oom_adj: -16
	I0626 20:53:28.180843   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:28.681089   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.180441   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.680355   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.180860   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:30.680971   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.181088   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:31.680352   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:29.516069   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.013135   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.013391   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:31.901777   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:34.400314   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:36.400967   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:32.180338   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:32.680389   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.180568   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:33.681010   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.180575   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:34.680905   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.180640   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:35.680412   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.181081   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.680836   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:36.514263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:39.013193   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:38.900309   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:40.900622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:37.181178   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:37.680710   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.180280   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:38.680304   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:39.681177   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.180431   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:40.681031   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.180847   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:41.681058   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.181122   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.680883   46683 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 20:53:42.800538   46683 kubeadm.go:1081] duration metric: took 15.496322508s to wait for elevateKubeSystemPrivileges.
	I0626 20:53:42.800568   46683 kubeadm.go:406] StartCluster complete in 5m56.189450192s
	I0626 20:53:42.800584   46683 settings.go:142] acquiring lock: {Name:mk60cdb20846591a32874a55d882187607a1e0d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.800661   46683 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:53:42.802530   46683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/kubeconfig: {Name:mkdc177d7754cb3698db3654a9e618a44a03246b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 20:53:42.802755   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 20:53:42.802810   46683 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 20:53:42.802908   46683 addons.go:66] Setting storage-provisioner=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802926   46683 addons.go:228] Setting addon storage-provisioner=true in "old-k8s-version-490377"
	W0626 20:53:42.802936   46683 addons.go:237] addon storage-provisioner should already be in state true
	I0626 20:53:42.802934   46683 addons.go:66] Setting default-storageclass=true in profile "old-k8s-version-490377"
	I0626 20:53:42.802953   46683 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-490377"
	I0626 20:53:42.802972   46683 config.go:182] Loaded profile config "old-k8s-version-490377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:53:42.802983   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.802974   46683 addons.go:66] Setting metrics-server=true in profile "old-k8s-version-490377"
	I0626 20:53:42.803034   46683 addons.go:228] Setting addon metrics-server=true in "old-k8s-version-490377"
	W0626 20:53:42.803048   46683 addons.go:237] addon metrics-server should already be in state true
	I0626 20:53:42.803155   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.803353   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803394   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803437   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803468   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.803563   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.803607   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.822676   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0626 20:53:42.822891   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0626 20:53:42.823127   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823221   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0626 20:53:42.823284   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823599   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.823763   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823771   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.823783   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.823790   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824056   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.824082   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.824096   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824141   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824310   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.824408   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.824656   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824682   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.824924   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.824954   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.839635   46683 addons.go:228] Setting addon default-storageclass=true in "old-k8s-version-490377"
	W0626 20:53:42.839655   46683 addons.go:237] addon default-storageclass should already be in state true
	I0626 20:53:42.839675   46683 host.go:66] Checking if "old-k8s-version-490377" exists ...
	I0626 20:53:42.840131   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.840171   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.846479   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0626 20:53:42.847180   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.847711   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.847728   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.848194   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.848454   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.848519   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40781
	I0626 20:53:42.850321   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.850427   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.852331   46683 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0626 20:53:42.851252   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.853522   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.853581   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 20:53:42.853603   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 20:53:42.853625   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.854082   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.854292   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.856641   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.858158   46683 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 20:53:42.857809   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.859467   46683 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:42.859485   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 20:53:42.859500   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.859505   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.859528   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.858223   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.858466   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0626 20:53:42.860179   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.860331   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.860421   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.860783   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.860909   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.860923   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.861642   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.862199   46683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:53:42.862246   46683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:53:42.863700   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864103   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.864124   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.864413   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.864598   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.864737   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.864867   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.878470   46683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0626 20:53:42.878961   46683 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:53:42.879500   46683 main.go:141] libmachine: Using API Version  1
	I0626 20:53:42.879510   46683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:53:42.879860   46683 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:53:42.880063   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetState
	I0626 20:53:42.881757   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .DriverName
	I0626 20:53:42.882028   46683 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:42.882040   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 20:53:42.882054   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHHostname
	I0626 20:53:42.887689   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHPort
	I0626 20:53:42.887749   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887765   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:27:8f", ip: ""} in network mk-old-k8s-version-490377: {Iface:virbr3 ExpiryTime:2023-06-26 21:47:27 +0000 UTC Type:0 Mac:52:54:00:cc:27:8f Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:old-k8s-version-490377 Clientid:01:52:54:00:cc:27:8f}
	I0626 20:53:42.887779   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | domain old-k8s-version-490377 has defined IP address 192.168.72.111 and MAC address 52:54:00:cc:27:8f in network mk-old-k8s-version-490377
	I0626 20:53:42.887888   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHKeyPath
	I0626 20:53:42.888058   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .GetSSHUsername
	I0626 20:53:42.888203   46683 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/old-k8s-version-490377/id_rsa Username:docker}
	I0626 20:53:42.981495   46683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
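The shell pipeline above patches the CoreDNS ConfigMap in transit; the stanza it inserts ahead of the forward block (reconstructed directly from the sed expression in the command) is:

	hosts {
	   192.168.72.1 host.minikube.internal
	   fallthrough
	}

Its effect is confirmed a few lines below, where start.go reports the host.minikube.internal record injected into CoreDNS's ConfigMap.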
	I0626 20:53:43.064530   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 20:53:43.064554   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0626 20:53:43.074105   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 20:53:43.091876   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 20:53:43.132074   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 20:53:43.132095   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 20:53:43.219103   46683 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:53:43.219133   46683 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 20:53:43.285081   46683 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 20:53:43.443796   46683 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-490377" context rescaled to 1 replicas
	I0626 20:53:43.443841   46683 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 20:53:43.445639   46683 out.go:177] * Verifying Kubernetes components...
	I0626 20:53:41.014279   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.515278   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:43.447458   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:53:43.642242   46683 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0626 20:53:44.194901   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.102988033s)
	I0626 20:53:44.194990   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195008   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.194932   46683 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120793889s)
	I0626 20:53:44.195085   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195096   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195425   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195452   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195466   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195475   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195486   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195493   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195518   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195529   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195540   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.195714   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195765   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195774   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195816   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.195893   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.195905   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.195922   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.195936   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.196171   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.196190   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.196197   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.260680   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.260703   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.260706   46683 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.261103   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261122   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261134   46683 main.go:141] libmachine: Making call to close driver server
	I0626 20:53:44.261144   46683 main.go:141] libmachine: (old-k8s-version-490377) Calling .Close
	I0626 20:53:44.261146   46683 main.go:141] libmachine: (old-k8s-version-490377) DBG | Closing plugin on server side
	I0626 20:53:44.261364   46683 main.go:141] libmachine: Successfully made call to close driver server
	I0626 20:53:44.261386   46683 main.go:141] libmachine: Making call to close connection to plugin binary
	I0626 20:53:44.261396   46683 addons.go:464] Verifying addon metrics-server=true in "old-k8s-version-490377"
	I0626 20:53:44.262936   46683 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0626 20:53:42.901604   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.902182   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:44.264049   46683 addons.go:499] enable addons completed in 1.461244367s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0626 20:53:44.318103   46683 node_ready.go:49] node "old-k8s-version-490377" has status "Ready":"True"
	I0626 20:53:44.318135   46683 node_ready.go:38] duration metric: took 57.40895ms waiting for node "old-k8s-version-490377" to be "Ready" ...
	I0626 20:53:44.318147   46683 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:53:44.333409   46683 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:46.345926   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:46.015128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.516066   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:47.400802   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:49.901066   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:48.347529   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:50.847639   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:51.012404   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.012697   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:52.400326   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:54.400932   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.402434   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:53.345907   46683 pod_ready.go:102] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:55.345824   46683 pod_ready.go:92] pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.345850   46683 pod_ready.go:81] duration metric: took 11.012408828s waiting for pod "coredns-5644d7b6d9-k6lww" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.345858   46683 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350198   46683 pod_ready.go:92] pod "kube-proxy-m7hz7" in "kube-system" namespace has status "Ready":"True"
	I0626 20:53:55.350214   46683 pod_ready.go:81] duration metric: took 4.351274ms waiting for pod "kube-proxy-m7hz7" in "kube-system" namespace to be "Ready" ...
	I0626 20:53:55.350222   46683 pod_ready.go:38] duration metric: took 11.032065043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:53:55.350236   46683 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:53:55.350285   46683 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:53:55.366478   46683 api_server.go:72] duration metric: took 11.922600619s to wait for apiserver process to appear ...
	I0626 20:53:55.366501   46683 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:53:55.366518   46683 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0626 20:53:55.373257   46683 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0626 20:53:55.374362   46683 api_server.go:141] control plane version: v1.16.0
	I0626 20:53:55.374382   46683 api_server.go:131] duration metric: took 7.874169ms to wait for apiserver health ...
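The healthz wait above issues HTTPS GETs against https://192.168.72.111:8443/healthz until the endpoint answers 200 with the body "ok". A minimal Go sketch of such a probe; waitForHealthz is an illustrative name, and TLS verification is skipped on the assumption that the probe cannot yet verify the bootstrap apiserver's certificate:

    // Sketch only: poll an apiserver /healthz endpoint until it
    // returns HTTP 200 or the deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// Equivalent in spirit to curl -k: accept the cluster's self-signed cert.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: ok
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.72.111:8443/healthz", time.Minute))
    }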
	I0626 20:53:55.374390   46683 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:53:55.377704   46683 system_pods.go:59] 4 kube-system pods found
	I0626 20:53:55.377719   46683 system_pods.go:61] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.377724   46683 system_pods.go:61] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.377744   46683 system_pods.go:61] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.377754   46683 system_pods.go:61] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.377759   46683 system_pods.go:74] duration metric: took 3.35753ms to wait for pod list to return data ...
	I0626 20:53:55.377765   46683 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:53:55.379628   46683 default_sa.go:45] found service account: "default"
	I0626 20:53:55.379641   46683 default_sa.go:55] duration metric: took 1.87263ms for default service account to be created ...
	I0626 20:53:55.379647   46683 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:53:55.382155   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.382171   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.382176   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.382183   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.382189   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.382204   46683 retry.go:31] will retry after 310.903974ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:55.698587   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:55.698613   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:55.698618   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:55.698625   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:55.698631   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:55.698646   46683 retry.go:31] will retry after 300.100433ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.005356   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.005397   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.005408   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.005419   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.005427   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.005446   46683 retry.go:31] will retry after 407.352435ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:56.417879   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.417905   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.417910   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.417916   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.417922   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.417935   46683 retry.go:31] will retry after 483.508514ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:55.013247   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:57.015631   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:58.900650   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.401491   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:53:56.906260   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:56.906282   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:56.906287   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:56.906293   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:56.906301   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:56.906319   46683 retry.go:31] will retry after 527.167542ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:57.438949   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:57.438985   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:57.438995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:57.439006   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:57.439019   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:57.439038   46683 retry.go:31] will retry after 902.255612ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:58.346131   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:58.346161   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:58.346166   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:58.346173   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:58.346179   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:58.346192   46683 retry.go:31] will retry after 904.271086ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.256458   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:53:59.256489   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:53:59.256497   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:53:59.256509   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:53:59.256517   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:53:59.256534   46683 retry.go:31] will retry after 1.069634228s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:00.331828   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:00.331858   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:00.331865   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:00.331873   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:00.331879   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:00.331896   46683 retry.go:31] will retry after 1.418598639s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:01.755104   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:01.755131   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:01.755136   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:01.755143   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:01.755149   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:01.755162   46683 retry.go:31] will retry after 1.624135654s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:53:59.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:01.514847   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.515086   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.900425   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:05.900854   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:03.385085   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:03.385111   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:03.385116   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:03.385122   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:03.385128   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:03.385142   46683 retry.go:31] will retry after 1.861818901s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:05.251844   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:05.251870   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:05.251875   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:05.251882   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:05.251888   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:05.251901   46683 retry.go:31] will retry after 3.23679019s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:06.013786   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.514493   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.399542   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:10.400928   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:08.494644   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:08.494669   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:08.494674   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:08.494681   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:08.494687   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:08.494700   46683 retry.go:31] will retry after 4.210335189s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:10.514707   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.515079   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.415273   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:14.899807   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:12.709730   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:12.709754   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:12.709759   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:12.709765   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:12.709771   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:12.709785   46683 retry.go:31] will retry after 4.208864521s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:15.012766   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:17.012807   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.014851   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.901107   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:19.400540   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:21.402204   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:16.923625   46683 system_pods.go:86] 4 kube-system pods found
	I0626 20:54:16.923654   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:16.923662   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:16.923673   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:16.923682   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:16.923701   46683 retry.go:31] will retry after 6.417296046s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:21.514829   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.515117   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.402546   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:25.903195   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:23.347074   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:23.347099   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:23.347105   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Pending
	I0626 20:54:23.347108   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:23.347115   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:23.347121   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:23.347133   46683 retry.go:31] will retry after 7.108155838s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0626 20:54:26.013263   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.013708   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:28.399697   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.401036   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:30.460927   46683 system_pods.go:86] 5 kube-system pods found
	I0626 20:54:30.460950   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:30.460955   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:30.460995   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:30.461004   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:30.461014   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:30.461027   46683 retry.go:31] will retry after 9.756193162s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:30.514139   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.514334   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:32.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:34.901064   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:35.013362   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.013815   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.014126   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:37.400945   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:39.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:40.223985   46683 system_pods.go:86] 7 kube-system pods found
	I0626 20:54:40.224009   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:40.224014   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Pending
	I0626 20:54:40.224018   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Pending
	I0626 20:54:40.224022   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:40.224026   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:40.224032   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:40.224037   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:40.224052   46683 retry.go:31] will retry after 8.963386657s: missing components: etcd, kube-apiserver, kube-scheduler
	I0626 20:54:41.515388   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:44.015053   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:41.900424   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:43.901263   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.400098   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:46.514128   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.013743   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:49.195390   46683 system_pods.go:86] 8 kube-system pods found
	I0626 20:54:49.195416   46683 system_pods.go:89] "coredns-5644d7b6d9-k6lww" [b447152e-e5ad-4a16-a2fa-e1283dd98e1b] Running
	I0626 20:54:49.195421   46683 system_pods.go:89] "etcd-old-k8s-version-490377" [5a6e4c4d-0b61-40af-ba9c-159c8a0323f0] Running
	I0626 20:54:49.195426   46683 system_pods.go:89] "kube-apiserver-old-k8s-version-490377" [34da9659-3b5b-4e4a-aa66-ac0ad7578d6a] Running
	I0626 20:54:49.195430   46683 system_pods.go:89] "kube-controller-manager-old-k8s-version-490377" [9fc0ab20-e05b-4bec-a791-d9f7b66e04a6] Running
	I0626 20:54:49.195434   46683 system_pods.go:89] "kube-proxy-m7hz7" [265fb314-5fe1-4cc2-bc03-79ec432d1a46] Running
	I0626 20:54:49.195438   46683 system_pods.go:89] "kube-scheduler-old-k8s-version-490377" [c6fe04b8-d037-452b-bf41-3719c032b7ef] Running
	I0626 20:54:49.195444   46683 system_pods.go:89] "metrics-server-74d5856cc6-bvbnj" [a51799c8-5cb6-42eb-85f0-508d0303445f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:54:49.195450   46683 system_pods.go:89] "storage-provisioner" [c17bf508-5125-4aa3-b48f-3ec6700ef03b] Running
	I0626 20:54:49.195458   46683 system_pods.go:126] duration metric: took 53.81580645s to wait for k8s-apps to be running ...
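The retry.go intervals above (310ms, 300ms, 407ms, ... up to 9.7s) grow roughly geometrically with randomization, so repeated component checks back off instead of hammering the apiserver at a fixed rate. A sketch of that pattern under those assumptions; this is illustrative, not minikube's actual retry.go:

    // Sketch only: retry with exponential backoff plus jitter, in the
    // spirit of the "will retry after ..." lines above.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	wait := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		// Randomize the interval so concurrent waiters don't sync up.
    		jittered := wait + time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("will retry after %s\n", jittered)
    		time.Sleep(jittered)
    		wait *= 2 // geometric growth, as in the observed intervals
    	}
    	return fmt.Errorf("gave up after %d attempts", attempts)
    }

    func main() {
    	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
    		return fmt.Errorf("missing components: etcd, kube-apiserver")
    	})
    	fmt.Println(err)
    }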
	I0626 20:54:49.195466   46683 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:54:49.195518   46683 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:54:49.219014   46683 system_svc.go:56] duration metric: took 23.534309ms WaitForService to wait for kubelet.
	I0626 20:54:49.219049   46683 kubeadm.go:581] duration metric: took 1m5.775176119s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:54:49.219089   46683 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:54:49.223397   46683 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:54:49.223426   46683 node_conditions.go:123] node cpu capacity is 2
	I0626 20:54:49.223438   46683 node_conditions.go:105] duration metric: took 4.339435ms to run NodePressure ...
	I0626 20:54:49.223452   46683 start.go:228] waiting for startup goroutines ...
	I0626 20:54:49.223461   46683 start.go:233] waiting for cluster config update ...
	I0626 20:54:49.223472   46683 start.go:242] writing updated cluster config ...
	I0626 20:54:49.223798   46683 ssh_runner.go:195] Run: rm -f paused
	I0626 20:54:49.277613   46683 start.go:652] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0626 20:54:49.279501   46683 out.go:177] 
	W0626 20:54:49.280841   46683 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0626 20:54:49.282249   46683 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0626 20:54:49.283695   46683 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-490377" cluster and "default" namespace by default
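The warning above reports a minor-version skew of 11 between the host kubectl (1.27.3) and the 1.16.0 control plane; upstream kubectl only guarantees compatibility within one minor version of the apiserver, hence the suggestion to use the bundled 'minikube kubectl'. For illustration, how that skew number falls out of the two versions (a sketch, not minikube's code):

    // Sketch only: compute the minor-version skew reported in the log.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	skew := minor("1.27.3") - minor("1.16.0")
    	fmt.Printf("minor skew: %d\n", skew) // kubectl's skew policy allows +/-1
    }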
	I0626 20:54:48.401602   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:50.900375   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:51.514071   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.013330   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:52.900782   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:54.900946   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.013501   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:58.014748   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:56.901531   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:54:59.401822   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:00.016725   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:02.514316   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:01.902698   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:04.400011   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:06.402149   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:05.014536   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:07.514975   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:08.900297   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.900463   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:10.013780   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:12.514823   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:13.399907   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.400044   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:15.014032   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.515161   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:17.907245   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.400962   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:20.015074   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.514465   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:22.403366   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.900247   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:24.514993   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.012592   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.013612   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:27.400192   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:29.401917   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.402240   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:31.015647   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.513844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:33.900187   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.902063   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:35.514657   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:37.514888   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:38.400753   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.902398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:40.014755   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:42.514599   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:43.401280   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:45.902265   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:44.521736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.016422   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:47.902334   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:50.400765   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:49.515570   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.014736   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:52.900293   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.900572   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:54.514047   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.013346   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.013409   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:57.400170   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:55:59.401528   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.013946   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:03.014845   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:01.902597   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:04.401919   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:05.514639   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:08.016797   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:06.901493   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:09.400229   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:11.401398   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:10.513478   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:12.514938   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:13.403138   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.901738   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:15.013852   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:17.514150   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:18.400812   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.401025   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:20.013522   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.015651   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.016747   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:22.401212   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:24.401675   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.515343   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:28.515706   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:26.902301   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:29.401779   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.012844   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:33.013826   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:31.901622   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.403688   47309 pod_ready.go:102] pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:34.993256   47309 pod_ready.go:81] duration metric: took 4m0.000204736s waiting for pod "metrics-server-74d5c6b9c-4dkpm" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:34.993309   47309 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:34.993324   47309 pod_ready.go:38] duration metric: took 4m11.355630262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:56:34.993352   47309 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:34.993410   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:34.993484   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:35.038316   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.038342   47309 cri.go:89] found id: ""
	I0626 20:56:35.038352   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:35.038414   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.042851   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:35.042914   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:35.076892   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.076925   47309 cri.go:89] found id: ""
	I0626 20:56:35.076934   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:35.076990   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.081850   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:35.081933   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:35.119872   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.119896   47309 cri.go:89] found id: ""
	I0626 20:56:35.119904   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:35.119971   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.124661   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:35.124731   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:35.158899   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.158924   47309 cri.go:89] found id: ""
	I0626 20:56:35.158933   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:35.158991   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.163512   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:35.163587   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:35.195698   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.195721   47309 cri.go:89] found id: ""
	I0626 20:56:35.195729   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:35.195786   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.199883   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:35.199935   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:35.243909   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.243932   47309 cri.go:89] found id: ""
	I0626 20:56:35.243939   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:35.243992   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.248331   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:35.248388   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:35.287985   47309 cri.go:89] found id: ""
	I0626 20:56:35.288009   47309 logs.go:284] 0 containers: []
	W0626 20:56:35.288019   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:35.288026   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:35.288085   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:35.324050   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.324129   47309 cri.go:89] found id: ""
	I0626 20:56:35.324151   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:35.324219   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:35.328564   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:35.328588   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:35.369968   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:35.369997   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:35.391588   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:35.391615   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:35.542328   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:35.542356   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:35.579140   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:35.579172   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:35.635428   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:35.635463   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:35.674715   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:35.674750   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:35.732788   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:35.732837   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:35.774860   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:35.774901   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:35.881082   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:35.881118   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:35.929445   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:35.929478   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:35.968723   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:35.968754   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
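The "Gathering logs" steps above run crictl and journalctl inside the node over SSH (ssh_runner.go). A self-contained sketch that reproduces the same collection when run directly on the node; the gather helper is an illustrative name, while the commands mirror the invocations in the log:

    // Sketch only: collect the same diagnostics minikube gathers,
    // assuming this runs on the node itself with sudo available.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(name string, args ...string) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s %v failed: %v\n", name, args, err)
    	}
    	fmt.Printf("==> %s %v\n%s\n", name, args, out)
    }

    func main() {
    	// Find container IDs, then tail service logs, matching the log above.
    	gather("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver")
    	gather("sudo", "journalctl", "-u", "kubelet", "-n", "400")
    	gather("sudo", "journalctl", "-u", "crio", "-n", "400")
    }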
	I0626 20:56:35.015798   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.514548   47605 pod_ready.go:102] pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace has status "Ready":"False"
	I0626 20:56:37.606375   47605 pod_ready.go:81] duration metric: took 4m0.000950536s waiting for pod "metrics-server-74d5c6b9c-vkggw" in "kube-system" namespace to be "Ready" ...
	E0626 20:56:37.606403   47605 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0626 20:56:37.606412   47605 pod_ready.go:38] duration metric: took 4m2.78027212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 20:56:37.606429   47605 api_server.go:52] waiting for apiserver process to appear ...
	I0626 20:56:37.606459   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:37.606521   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:37.668350   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:37.668383   47605 cri.go:89] found id: ""
	I0626 20:56:37.668391   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:37.668453   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.675583   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:37.675669   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:37.710826   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:37.710852   47605 cri.go:89] found id: ""
	I0626 20:56:37.710860   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:37.710916   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.715610   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:37.715671   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:37.751709   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:37.751784   47605 cri.go:89] found id: ""
	I0626 20:56:37.751812   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:37.751877   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.757177   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:37.757241   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:37.790384   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:37.790413   47605 cri.go:89] found id: ""
	I0626 20:56:37.790420   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:37.790468   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.795294   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:37.795352   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:37.832125   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:37.832157   47605 cri.go:89] found id: ""
	I0626 20:56:37.832168   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:37.832239   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.836762   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:37.836816   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:37.877789   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:37.877817   47605 cri.go:89] found id: ""
	I0626 20:56:37.877827   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:37.877887   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.885276   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:37.885348   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:37.929701   47605 cri.go:89] found id: ""
	I0626 20:56:37.929731   47605 logs.go:284] 0 containers: []
	W0626 20:56:37.929745   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:37.929755   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:37.929815   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:37.970177   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:37.970201   47605 cri.go:89] found id: ""
	I0626 20:56:37.970211   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:37.970270   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:37.975002   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:37.975025   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:38.022831   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:38.022862   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:38.058414   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:38.058446   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:38.168689   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:38.168726   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:38.183930   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:38.183959   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:38.224623   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:38.224653   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:38.271164   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:38.271205   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:38.308365   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:38.308391   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:38.363321   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:38.363356   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:38.510275   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:38.510310   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:38.552512   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:38.552544   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:38.586122   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:38.586155   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
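The gathering cycle above follows one fixed recipe per component: resolve a container ID with "sudo crictl ps -a --quiet --name=<component>", then tail its output with "sudo /usr/bin/crictl logs --tail 400 <id>"; the kubelet, CRI-O, and dmesg steps are the analogous "journalctl -u <unit> -n 400" and "dmesg ... | tail -n 400" wrappers, and the container-status step falls back to "docker ps -a" when crictl is missing. Below is a minimal Go sketch of that recipe, run directly on the node rather than through minikube's ssh_runner; the component list and the 400-line tail come from the log above, while the function name and error handling are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs mirrors the crictl recipe in the log above: resolve the
// container IDs for a component, then tail the first one's logs.
func tailComponentLogs(component string) error {
	// sudo crictl ps -a --quiet --name=<component>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container was found matching %q\n", component)
		return nil
	}
	// sudo /usr/bin/crictl logs --tail 400 <id>
	logs, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
	if err != nil {
		return err
	}
	fmt.Printf("==> %s [%s] <==\n%s", component, ids[0], logs)
	return nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		if err := tailComponentLogs(c); err != nil {
			fmt.Println("error:", err)
		}
	}
}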
	I0626 20:56:38.945144   47309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:38.962999   47309 api_server.go:72] duration metric: took 4m18.467522928s to wait for apiserver process to appear ...
	I0626 20:56:38.963026   47309 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:56:38.963067   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:38.963129   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:39.002109   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.002133   47309 cri.go:89] found id: ""
	I0626 20:56:39.002141   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:39.002198   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.006799   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:39.006864   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:39.042531   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:39.042556   47309 cri.go:89] found id: ""
	I0626 20:56:39.042566   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:39.042621   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.047228   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:39.047301   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:39.080810   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.080842   47309 cri.go:89] found id: ""
	I0626 20:56:39.080850   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:39.080916   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.085173   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:39.085238   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:39.116857   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:39.116886   47309 cri.go:89] found id: ""
	I0626 20:56:39.116895   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:39.116946   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.121912   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:39.122007   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:39.166886   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.166912   47309 cri.go:89] found id: ""
	I0626 20:56:39.166920   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:39.166972   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.171344   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:39.171420   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:39.205333   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:39.205358   47309 cri.go:89] found id: ""
	I0626 20:56:39.205366   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:39.205445   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.211414   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:39.211491   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:39.249068   47309 cri.go:89] found id: ""
	I0626 20:56:39.249092   47309 logs.go:284] 0 containers: []
	W0626 20:56:39.249103   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:39.249110   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:39.249171   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:39.283295   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.283314   47309 cri.go:89] found id: ""
	I0626 20:56:39.283325   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:39.283372   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:39.287514   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:39.287537   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:39.420720   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:39.420752   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:39.479018   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:39.479052   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:39.512285   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:39.512313   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:39.549886   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:39.549922   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:39.590619   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:39.590647   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:40.076597   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:40.076642   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:40.092551   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:40.092581   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:40.135655   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:40.135699   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:40.184590   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:40.184628   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:40.238354   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:40.238393   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:40.283033   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:40.283075   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
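Both runs gate the healthz phase on the apiserver process itself first: api_server.go:72 records a roughly four-minute "wait for apiserver process" metric that ends the moment "sudo pgrep -xnf kube-apiserver.*minikube.*" exits 0. A hedged sketch of that wait as a retry loop follows; the pgrep invocation is taken verbatim from the log, while the five-minute timeout and two-second poll interval are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
// full command line matches the minikube pattern exists. pgrep exits 0 when
// at least one process matches, so Run()'s error doubles as the probe result.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(5 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}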
	I0626 20:56:41.567686   47605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:56:41.584431   47605 api_server.go:72] duration metric: took 4m9.528462616s to wait for apiserver process to appear ...
	I0626 20:56:41.584462   47605 api_server.go:88] waiting for apiserver healthz status ...
	I0626 20:56:41.584492   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:41.584553   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:41.622027   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:41.622051   47605 cri.go:89] found id: ""
	I0626 20:56:41.622061   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:41.622119   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.626209   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:41.626271   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:41.658658   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:41.658680   47605 cri.go:89] found id: ""
	I0626 20:56:41.658689   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:41.658746   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.666357   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:41.666437   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:41.702344   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:41.702369   47605 cri.go:89] found id: ""
	I0626 20:56:41.702378   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:41.702443   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.706706   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:41.706775   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:41.743534   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:41.743554   47605 cri.go:89] found id: ""
	I0626 20:56:41.743561   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:41.743619   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.748338   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:41.748408   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:41.780299   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:41.780324   47605 cri.go:89] found id: ""
	I0626 20:56:41.780333   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:41.780392   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.785308   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:41.785395   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:41.819335   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:41.819361   47605 cri.go:89] found id: ""
	I0626 20:56:41.819370   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:41.819415   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.823767   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:41.823832   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:41.855049   47605 cri.go:89] found id: ""
	I0626 20:56:41.855079   47605 logs.go:284] 0 containers: []
	W0626 20:56:41.855088   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:41.855094   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:41.855147   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:41.886378   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:41.886400   47605 cri.go:89] found id: ""
	I0626 20:56:41.886408   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:41.886459   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:41.891748   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:41.891777   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:42.003933   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:42.003968   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:42.018182   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:42.018230   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:42.145038   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:42.145074   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:42.181403   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:42.181438   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:42.224428   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:42.224467   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:42.260067   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:42.260097   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:42.312924   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:42.312972   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:42.347173   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:42.347203   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:42.920689   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:42.920725   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:42.970428   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:42.970456   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:43.021561   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.021587   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:42.886551   47309 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0626 20:56:42.892462   47309 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0626 20:56:42.894253   47309 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:42.894277   47309 api_server.go:131] duration metric: took 3.931242905s to wait for apiserver health ...
	I0626 20:56:42.894286   47309 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:42.894309   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:42.894364   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:42.931699   47309 cri.go:89] found id: "677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:42.931728   47309 cri.go:89] found id: ""
	I0626 20:56:42.931736   47309 logs.go:284] 1 containers: [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937]
	I0626 20:56:42.931792   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.936873   47309 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:42.936944   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:42.968701   47309 cri.go:89] found id: "d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:42.968720   47309 cri.go:89] found id: ""
	I0626 20:56:42.968727   47309 logs.go:284] 1 containers: [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2]
	I0626 20:56:42.968778   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:42.974309   47309 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:42.974381   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:43.010388   47309 cri.go:89] found id: "3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:43.010416   47309 cri.go:89] found id: ""
	I0626 20:56:43.010425   47309 logs.go:284] 1 containers: [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf]
	I0626 20:56:43.010482   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.015524   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:43.015582   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:43.049074   47309 cri.go:89] found id: "4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.049103   47309 cri.go:89] found id: ""
	I0626 20:56:43.049112   47309 logs.go:284] 1 containers: [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1]
	I0626 20:56:43.049173   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.053750   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:43.053814   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:43.096699   47309 cri.go:89] found id: "d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:43.096727   47309 cri.go:89] found id: ""
	I0626 20:56:43.096734   47309 logs.go:284] 1 containers: [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b]
	I0626 20:56:43.096776   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.101210   47309 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:43.101264   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:43.133316   47309 cri.go:89] found id: "9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:43.133344   47309 cri.go:89] found id: ""
	I0626 20:56:43.133354   47309 logs.go:284] 1 containers: [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b]
	I0626 20:56:43.133420   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.138226   47309 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:43.138289   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:43.169863   47309 cri.go:89] found id: ""
	I0626 20:56:43.169896   47309 logs.go:284] 0 containers: []
	W0626 20:56:43.169903   47309 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:43.169908   47309 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:43.169962   47309 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:43.201859   47309 cri.go:89] found id: "cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.201884   47309 cri.go:89] found id: ""
	I0626 20:56:43.201892   47309 logs.go:284] 1 containers: [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec]
	I0626 20:56:43.201942   47309 ssh_runner.go:195] Run: which crictl
	I0626 20:56:43.207043   47309 logs.go:123] Gathering logs for kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] ...
	I0626 20:56:43.207072   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1"
	I0626 20:56:43.264723   47309 logs.go:123] Gathering logs for storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] ...
	I0626 20:56:43.264755   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec"
	I0626 20:56:43.301988   47309 logs.go:123] Gathering logs for container status ...
	I0626 20:56:43.302016   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:43.344103   47309 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:43.344132   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:43.357414   47309 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:43.357445   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:43.486425   47309 logs.go:123] Gathering logs for kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] ...
	I0626 20:56:43.486453   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937"
	I0626 20:56:43.529205   47309 logs.go:123] Gathering logs for etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] ...
	I0626 20:56:43.529239   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2"
	I0626 20:56:43.575311   47309 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:43.575344   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:44.074749   47309 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:44.074790   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:44.184946   47309 logs.go:123] Gathering logs for coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] ...
	I0626 20:56:44.184987   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf"
	I0626 20:56:44.221993   47309 logs.go:123] Gathering logs for kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] ...
	I0626 20:56:44.222028   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b"
	I0626 20:56:44.263095   47309 logs.go:123] Gathering logs for kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] ...
	I0626 20:56:44.263127   47309 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b"
	I0626 20:56:46.817987   47309 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:46.818014   47309 system_pods.go:61] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.818019   47309 system_pods.go:61] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.818023   47309 system_pods.go:61] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.818027   47309 system_pods.go:61] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.818031   47309 system_pods.go:61] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.818035   47309 system_pods.go:61] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.818041   47309 system_pods.go:61] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.818047   47309 system_pods.go:61] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.818052   47309 system_pods.go:74] duration metric: took 3.923762125s to wait for pod list to return data ...
	I0626 20:56:46.818061   47309 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:46.821789   47309 default_sa.go:45] found service account: "default"
	I0626 20:56:46.821811   47309 default_sa.go:55] duration metric: took 3.746079ms for default service account to be created ...
	I0626 20:56:46.821818   47309 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:46.830080   47309 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:46.830117   47309 system_pods.go:89] "coredns-5d78c9869d-xm96k" [ac95f06b-2ed5-4979-9282-f33eaa18dc7f] Running
	I0626 20:56:46.830127   47309 system_pods.go:89] "etcd-no-preload-934450" [326e3bf5-8e93-47c1-b5c9-21b1888380b8] Running
	I0626 20:56:46.830134   47309 system_pods.go:89] "kube-apiserver-no-preload-934450" [4ee787d8-730e-4eae-8f33-9d7702c5465c] Running
	I0626 20:56:46.830141   47309 system_pods.go:89] "kube-controller-manager-no-preload-934450" [e4fa60bf-745e-4209-9415-8c96cdb609ee] Running
	I0626 20:56:46.830147   47309 system_pods.go:89] "kube-proxy-jhz99" [f79864b8-d96c-4d24-b6e4-a402081ad34a] Running
	I0626 20:56:46.830153   47309 system_pods.go:89] "kube-scheduler-no-preload-934450" [a0a0d216-015c-480d-af32-75e7bdf8ee31] Running
	I0626 20:56:46.830165   47309 system_pods.go:89] "metrics-server-74d5c6b9c-4dkpm" [2a86e50e-ef2a-442a-908f-d01b2292f977] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:46.830178   47309 system_pods.go:89] "storage-provisioner" [add6b7bd-e1b5-4520-a7e6-cf999357c2be] Running
	I0626 20:56:46.830186   47309 system_pods.go:126] duration metric: took 8.363064ms to wait for k8s-apps to be running ...
	I0626 20:56:46.830198   47309 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:46.830250   47309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:46.851429   47309 system_svc.go:56] duration metric: took 21.223321ms WaitForService to wait for kubelet.
	I0626 20:56:46.851456   47309 kubeadm.go:581] duration metric: took 4m26.355992846s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:46.851482   47309 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:46.856152   47309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:46.856177   47309 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:46.856187   47309 node_conditions.go:105] duration metric: took 4.700595ms to run NodePressure ...
	I0626 20:56:46.856197   47309 start.go:228] waiting for startup goroutines ...
	I0626 20:56:46.856203   47309 start.go:233] waiting for cluster config update ...
	I0626 20:56:46.856212   47309 start.go:242] writing updated cluster config ...
	I0626 20:56:46.856472   47309 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:46.911414   47309 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:46.913280   47309 out.go:177] * Done! kubectl is now configured to use "no-preload-934450" cluster and "default" namespace by default
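The closing sequence for each cluster is the same: poll https://<node-ip>:8443/healthz until it returns 200 "ok", read the control-plane version, wait for the kube-system pod list, confirm the "default" service account, and verify the kubelet unit with systemctl is-active. Below is a minimal sketch of the healthz probe only, assuming certificate verification is skipped for brevity (minikube itself trusts the cluster CA); the URL matches the no-preload run above, and the timeouts are illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok". InsecureSkipVerify stands in for the cluster CA bundle that a
// real client would load instead.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("healthz did not return 200 within %s", timeout)
}

func main() {
	if err := probeHealthz("https://192.168.50.38:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}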
	I0626 20:56:45.561459   47605 api_server.go:253] Checking apiserver healthz at https://192.168.39.51:8443/healthz ...
	I0626 20:56:45.567555   47605 api_server.go:279] https://192.168.39.51:8443/healthz returned 200:
	ok
	I0626 20:56:45.568704   47605 api_server.go:141] control plane version: v1.27.3
	I0626 20:56:45.568720   47605 api_server.go:131] duration metric: took 3.984252941s to wait for apiserver health ...
	I0626 20:56:45.568728   47605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 20:56:45.568745   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0626 20:56:45.568789   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0626 20:56:45.608235   47605 cri.go:89] found id: "8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:45.608261   47605 cri.go:89] found id: ""
	I0626 20:56:45.608270   47605 logs.go:284] 1 containers: [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58]
	I0626 20:56:45.608335   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.612705   47605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0626 20:56:45.612774   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0626 20:56:45.649330   47605 cri.go:89] found id: "e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.649353   47605 cri.go:89] found id: ""
	I0626 20:56:45.649362   47605 logs.go:284] 1 containers: [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84]
	I0626 20:56:45.649440   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.655104   47605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0626 20:56:45.655178   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0626 20:56:45.699690   47605 cri.go:89] found id: "f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.699711   47605 cri.go:89] found id: ""
	I0626 20:56:45.699722   47605 logs.go:284] 1 containers: [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222]
	I0626 20:56:45.699767   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.704455   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0626 20:56:45.704515   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0626 20:56:45.743181   47605 cri.go:89] found id: "c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:45.743209   47605 cri.go:89] found id: ""
	I0626 20:56:45.743218   47605 logs.go:284] 1 containers: [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8]
	I0626 20:56:45.743283   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.748030   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0626 20:56:45.748098   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0626 20:56:45.787325   47605 cri.go:89] found id: "3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:45.787352   47605 cri.go:89] found id: ""
	I0626 20:56:45.787360   47605 logs.go:284] 1 containers: [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848]
	I0626 20:56:45.787406   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.792119   47605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0626 20:56:45.792191   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0626 20:56:45.833192   47605 cri.go:89] found id: "e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:45.833215   47605 cri.go:89] found id: ""
	I0626 20:56:45.833222   47605 logs.go:284] 1 containers: [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30]
	I0626 20:56:45.833279   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.838399   47605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0626 20:56:45.838464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0626 20:56:45.878372   47605 cri.go:89] found id: ""
	I0626 20:56:45.878403   47605 logs.go:284] 0 containers: []
	W0626 20:56:45.878410   47605 logs.go:286] No container was found matching "kindnet"
	I0626 20:56:45.878415   47605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0626 20:56:45.878464   47605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0626 20:56:45.917051   47605 cri.go:89] found id: "f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:45.917074   47605 cri.go:89] found id: ""
	I0626 20:56:45.917081   47605 logs.go:284] 1 containers: [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6]
	I0626 20:56:45.917125   47605 ssh_runner.go:195] Run: which crictl
	I0626 20:56:45.921484   47605 logs.go:123] Gathering logs for etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] ...
	I0626 20:56:45.921508   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84"
	I0626 20:56:45.962659   47605 logs.go:123] Gathering logs for coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] ...
	I0626 20:56:45.962699   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222"
	I0626 20:56:45.993644   47605 logs.go:123] Gathering logs for kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] ...
	I0626 20:56:45.993674   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30"
	I0626 20:56:46.055087   47605 logs.go:123] Gathering logs for CRI-O ...
	I0626 20:56:46.055130   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0626 20:56:46.574535   47605 logs.go:123] Gathering logs for container status ...
	I0626 20:56:46.574581   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0626 20:56:46.617139   47605 logs.go:123] Gathering logs for kubelet ...
	I0626 20:56:46.617174   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0626 20:56:46.729727   47605 logs.go:123] Gathering logs for describe nodes ...
	I0626 20:56:46.729768   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0626 20:56:46.860871   47605 logs.go:123] Gathering logs for kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] ...
	I0626 20:56:46.860908   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58"
	I0626 20:56:46.922618   47605 logs.go:123] Gathering logs for kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] ...
	I0626 20:56:46.922657   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8"
	I0626 20:56:46.975973   47605 logs.go:123] Gathering logs for kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] ...
	I0626 20:56:46.976000   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848"
	I0626 20:56:47.017458   47605 logs.go:123] Gathering logs for storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] ...
	I0626 20:56:47.017488   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6"
	I0626 20:56:47.058540   47605 logs.go:123] Gathering logs for dmesg ...
	I0626 20:56:47.058567   47605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0626 20:56:49.582112   47605 system_pods.go:59] 8 kube-system pods found
	I0626 20:56:49.582139   47605 system_pods.go:61] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.582145   47605 system_pods.go:61] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.582149   47605 system_pods.go:61] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.582153   47605 system_pods.go:61] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.582157   47605 system_pods.go:61] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.582163   47605 system_pods.go:61] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.582169   47605 system_pods.go:61] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.582175   47605 system_pods.go:61] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.582180   47605 system_pods.go:74] duration metric: took 4.013448182s to wait for pod list to return data ...
	I0626 20:56:49.582187   47605 default_sa.go:34] waiting for default service account to be created ...
	I0626 20:56:49.588793   47605 default_sa.go:45] found service account: "default"
	I0626 20:56:49.588827   47605 default_sa.go:55] duration metric: took 6.634132ms for default service account to be created ...
	I0626 20:56:49.588836   47605 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 20:56:49.596519   47605 system_pods.go:86] 8 kube-system pods found
	I0626 20:56:49.596549   47605 system_pods.go:89] "coredns-5d78c9869d-tl42z" [429d2f2e-a161-4353-8a29-1a4f8ddb4cc8] Running
	I0626 20:56:49.596555   47605 system_pods.go:89] "etcd-embed-certs-299839" [739398d0-0a30-4e16-8a78-df4b5293a149] Running
	I0626 20:56:49.596562   47605 system_pods.go:89] "kube-apiserver-embed-certs-299839" [22a0fe62-6804-45a5-8d97-f34ea8b44163] Running
	I0626 20:56:49.596570   47605 system_pods.go:89] "kube-controller-manager-embed-certs-299839" [54ed7958-329e-48c5-b1a8-ac19cc51c802] Running
	I0626 20:56:49.596577   47605 system_pods.go:89] "kube-proxy-scfwr" [60aed765-875d-4023-9ce9-97b5a6a47995] Running
	I0626 20:56:49.596585   47605 system_pods.go:89] "kube-scheduler-embed-certs-299839" [129716ad-2c9e-4d16-b578-eec1cfe2a8d7] Running
	I0626 20:56:49.596600   47605 system_pods.go:89] "metrics-server-74d5c6b9c-vkggw" [147679d1-7453-4e55-862c-fec18e08ba84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0626 20:56:49.596612   47605 system_pods.go:89] "storage-provisioner" [51730db4-00b6-4240-917c-fed87615fd6e] Running
	I0626 20:56:49.596622   47605 system_pods.go:126] duration metric: took 7.781697ms to wait for k8s-apps to be running ...
	I0626 20:56:49.596633   47605 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 20:56:49.596684   47605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:56:49.613188   47605 system_svc.go:56] duration metric: took 16.545322ms WaitForService to wait for kubelet.
	I0626 20:56:49.613212   47605 kubeadm.go:581] duration metric: took 4m17.557252465s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 20:56:49.613231   47605 node_conditions.go:102] verifying NodePressure condition ...
	I0626 20:56:49.616820   47605 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0626 20:56:49.616845   47605 node_conditions.go:123] node cpu capacity is 2
	I0626 20:56:49.616854   47605 node_conditions.go:105] duration metric: took 3.619443ms to run NodePressure ...
	I0626 20:56:49.616864   47605 start.go:228] waiting for startup goroutines ...
	I0626 20:56:49.616870   47605 start.go:233] waiting for cluster config update ...
	I0626 20:56:49.616878   47605 start.go:242] writing updated cluster config ...
	I0626 20:56:49.617126   47605 ssh_runner.go:195] Run: rm -f paused
	I0626 20:56:49.665468   47605 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 20:56:49.667447   47605 out.go:177] * Done! kubectl is now configured to use "embed-certs-299839" cluster and "default" namespace by default
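The system_pods wait above boils down to listing kube-system pods and checking each phase; note that the metrics-server pod stays Pending in both clusters, which this particular wait tolerates. A hedged client-go sketch of the same check follows; the kubeconfig path is the one the log passes to the bundled kubectl, and a production wait would retry on a timer rather than report once.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the bundled kubectl in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
		fmt.Printf("%-55s %s\n", p.Name, p.Status.Phase)
	}
	fmt.Printf("%d/%d kube-system pods Running\n", running, len(pods.Items))
}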
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 20:47:27 UTC, ends at Mon 2023-06-26 21:06:46 UTC. --
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.555927390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=df93795d-c6a5-4bbd-8170-094e05984bf4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.556123466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f,PodSandboxId:80814ed400554f6c5b7e1841b2cfbc08505c3803222c8567da1d23bbcc6ccb2a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812825409043185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c17bf508-5125-4aa3-b48f-3ec6700ef03b,},Annotations:map[string]string{io.kubernetes.container.hash: a3700da9,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796,PodSandboxId:fc0f3f92592360e15eae13cff8501e4d7323272330a2ca28a712e83bfbd90b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1687812824885167913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-k6lww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447152e-e5ad-4a16-a2fa-e1283dd98e1b,},Annotations:map[string]string{io.kubernetes.container.hash: 33d97290,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41,PodSandboxId:f74c1c90d9a549a6594bed35ce6ad1d5d3e7f41488c03a64516c7ce1f2c2f246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1687812824565680340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7hz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fb
314-5fe1-4cc2-bc03-79ec432d1a46,},Annotations:map[string]string{io.kubernetes.container.hash: 68d69de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598,PodSandboxId:b87c3356304de48a414b4183b4247071e35a2d0a5737b06f2a7aa7947d7756ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1687812798386109622,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d16f4e4c3d338ac15a9bae60bef2daa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: fd37bd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae,PodSandboxId:92ce758b11fda23f5c677d139551912682ce612b25814b44416b5eef5ea661c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1687812797512587893,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da,PodSandboxId:83db5f78d9adb614bebee733faf06bae6055948fd6d9aaceec688f8186289d6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1687812797077470259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365,PodSandboxId:7c09e40e4201dca1bbeadaf2a2f42991c13b3c837a9b0af472b16f6d5e33ac31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1687812796978499334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e703b2994e5bd1a9d98777f091e32ff6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e363e056,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=df93795d-c6a5-4bbd-8170-094e05984bf4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.584155598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=50ed04c8-bdb8-4fd3-ae91-2314ef23dbba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.584277410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=50ed04c8-bdb8-4fd3-ae91-2314ef23dbba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.584522171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f,PodSandboxId:80814ed400554f6c5b7e1841b2cfbc08505c3803222c8567da1d23bbcc6ccb2a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812825409043185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c17bf508-5125-4aa3-b48f-3ec6700ef03b,},Annotations:map[string]string{io.kubernetes.container.hash: a3700da9,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796,PodSandboxId:fc0f3f92592360e15eae13cff8501e4d7323272330a2ca28a712e83bfbd90b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1687812824885167913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-k6lww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447152e-e5ad-4a16-a2fa-e1283dd98e1b,},Annotations:map[string]string{io.kubernetes.container.hash: 33d97290,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41,PodSandboxId:f74c1c90d9a549a6594bed35ce6ad1d5d3e7f41488c03a64516c7ce1f2c2f246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1687812824565680340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7hz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fb
314-5fe1-4cc2-bc03-79ec432d1a46,},Annotations:map[string]string{io.kubernetes.container.hash: 68d69de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598,PodSandboxId:b87c3356304de48a414b4183b4247071e35a2d0a5737b06f2a7aa7947d7756ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1687812798386109622,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d16f4e4c3d338ac15a9bae60bef2daa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: fd37bd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae,PodSandboxId:92ce758b11fda23f5c677d139551912682ce612b25814b44416b5eef5ea661c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1687812797512587893,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da,PodSandboxId:83db5f78d9adb614bebee733faf06bae6055948fd6d9aaceec688f8186289d6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1687812797077470259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365,PodSandboxId:7c09e40e4201dca1bbeadaf2a2f42991c13b3c837a9b0af472b16f6d5e33ac31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1687812796978499334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e703b2994e5bd1a9d98777f091e32ff6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e363e056,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=50ed04c8-bdb8-4fd3-ae91-2314ef23dbba name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.585446781Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=152b893b-e2e0-4ce7-b7ed-6b386076f722 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.585614177Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1687812825574683587,StartedAt:1687812825662296388,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c17bf508-5125-4aa3-b48f-3ec6700ef03b,},Annotations:map[string]string{io.kubernetes.container.hash: a3700da9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c17bf508-5125-4aa3-b48f-3ec6700ef03b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c17bf508-5125-4aa3-b48f-3ec6700ef03b/containers/storage-provisioner/de976276,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/c17bf508-5125-4aa3-b48f-3ec6700ef03b/volumes/kubernetes.io~secret/storage-provisioner-token-2msfc,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_storage-provisioner_c17bf508-5125-4aa3-b48f-3ec6700ef03b/storage-pr
ovisioner/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=152b893b-e2e0-4ce7-b7ed-6b386076f722 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.586360859Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=11f33bdc-1a16-4550-ae53-b448aaf372f2 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.586563667Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1687812824970250217,StartedAt:1687812825010600191,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/coredns:1.6.2,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-k6lww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447152e-e5ad-4a16-a2fa-e1283dd98e1b,},Annotations:map[string]string{io.kubernetes.container.hash: 33d97290,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/b447152e-e5ad-4a16-a2fa-e1283dd98e1b/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b447152e-e5ad-4a16-a2fa-e1283dd98e1b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b447152e-e5ad-4a16-a2fa-e1283dd98e1b/containers/coredns/9f92dae6,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/b447152
e-e5ad-4a16-a2fa-e1283dd98e1b/volumes/kubernetes.io~secret/coredns-token-nzlrr,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-5644d7b6d9-k6lww_b447152e-e5ad-4a16-a2fa-e1283dd98e1b/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=11f33bdc-1a16-4550-ae53-b448aaf372f2 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.587285966Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=180e99bf-daaf-4b90-94c7-fdb746e9b627 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.587459376Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1687812824881125115,StartedAt:1687812824965592913,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-proxy:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7hz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fb314-5fe1-4cc2-bc03-79ec432d1a46,},Annotations:map[string]string{io.kubernetes.container.hash: 68d69de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/265fb314-5fe1-4cc2-bc03-79ec432d1a46/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/265fb314-5fe1-4cc2-bc03-79ec432d1a46/containers/kube-proxy/e25ff4e0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubelet/pods/265fb314-5fe1-4cc2-bc03-79ec432d1a46/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/servi
ceaccount,HostPath:/var/lib/kubelet/pods/265fb314-5fe1-4cc2-bc03-79ec432d1a46/volumes/kubernetes.io~secret/kube-proxy-token-z87kr,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-proxy-m7hz7_265fb314-5fe1-4cc2-bc03-79ec432d1a46/kube-proxy/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=180e99bf-daaf-4b90-94c7-fdb746e9b627 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.588096302Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=473452ec-5d53-4649-954e-b87bed65734b name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.588270433Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1687812798464139770,StartedAt:1687812798511477448,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/etcd:3.3.15-0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d16f4e4c3d338ac15a9bae60bef2daa,},Annotations:map[string]string{io.kubernetes.container.hash: fd37bd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/2d16f4e4c3d338ac15a9bae60bef2daa/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/2d16f4e4c3d338ac15a9bae60bef2daa/containers/etcd/4ca3b706,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-old-k8s-version-490377_2d16f4e4c3d338ac15a9bae60bef2daa/etcd/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=473452ec-5d53-4649-954e-b87bed65734b name=/runtime.v1alpha2.RuntimeService/ContainerSt
atus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.589298671Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=6936fc45-5c64-43dd-9665-892c2fd17e72 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.589432137Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1687812797585394248,StartedAt:1687812797627207572,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-scheduler:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b3d303074fe0ca1d42a8bd9ed248df09/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b3d303074fe0ca1d42a8bd9ed248df09/containers/kube-scheduler/147f7df2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-old-k8s-version-490377_b3d303074fe0ca1d42a8bd9ed248df09/kube-scheduler/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=6936fc45-5c64-43dd-9665-892c2fd17e72 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.589985781Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=8795fa67-32ac-48c0-8ddd-6c530b2c24ce name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.590586101Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1687812797177718392,StartedAt:1687812797225651227,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-controller-manager:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7376ddb4f190a0ded9394063437bcb4e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7376ddb4f190a0ded9394063437bcb4e/containers/kube-controller-manager/26fdba7c,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVA
TE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-490377_7376ddb4f190a0ded9394063437bcb4e/kube-controller-manager/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=8795fa67-32ac-48c0-8ddd-6c530b2c24ce name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.592273190Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=627a0c0c-cfa4-472e-8cdf-3626e247e853 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.592504279Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1687812797118519203,StartedAt:1687812797186994028,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-apiserver:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e703b2994e5bd1a9d98777f091e32ff6,},Annotations:map[string]string{io.kubernetes.container.hash: e363e056,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e703b2994e5bd1a9d98777f091e32ff6/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e703b2994e5bd1a9d98777f091e32ff6/containers/kube-apiserver/e3a9038a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-old-k8s-version-490377_e703b29
94e5bd1a9d98777f091e32ff6/kube-apiserver/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=627a0c0c-cfa4-472e-8cdf-3626e247e853 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.606804539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=81f8bdbf-2244-4da5-aabc-685bf717f05d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.606975335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=81f8bdbf-2244-4da5-aabc-685bf717f05d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.607225829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f,PodSandboxId:80814ed400554f6c5b7e1841b2cfbc08505c3803222c8567da1d23bbcc6ccb2a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812825409043185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c17bf508-5125-4aa3-b48f-3ec6700ef03b,},Annotations:map[string]string{io.kubernetes.container.hash: a3700da9,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796,PodSandboxId:fc0f3f92592360e15eae13cff8501e4d7323272330a2ca28a712e83bfbd90b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1687812824885167913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-k6lww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447152e-e5ad-4a16-a2fa-e1283dd98e1b,},Annotations:map[string]string{io.kubernetes.container.hash: 33d97290,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41,PodSandboxId:f74c1c90d9a549a6594bed35ce6ad1d5d3e7f41488c03a64516c7ce1f2c2f246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1687812824565680340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7hz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fb
314-5fe1-4cc2-bc03-79ec432d1a46,},Annotations:map[string]string{io.kubernetes.container.hash: 68d69de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598,PodSandboxId:b87c3356304de48a414b4183b4247071e35a2d0a5737b06f2a7aa7947d7756ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1687812798386109622,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d16f4e4c3d338ac15a9bae60bef2daa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: fd37bd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae,PodSandboxId:92ce758b11fda23f5c677d139551912682ce612b25814b44416b5eef5ea661c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1687812797512587893,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da,PodSandboxId:83db5f78d9adb614bebee733faf06bae6055948fd6d9aaceec688f8186289d6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1687812797077470259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365,PodSandboxId:7c09e40e4201dca1bbeadaf2a2f42991c13b3c837a9b0af472b16f6d5e33ac31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1687812796978499334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e703b2994e5bd1a9d98777f091e32ff6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e363e056,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=81f8bdbf-2244-4da5-aabc-685bf717f05d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.644513571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f2a4f182-e1e0-4e07-bd2b-56aa2ba2b820 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.644634072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f2a4f182-e1e0-4e07-bd2b-56aa2ba2b820 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:06:46 old-k8s-version-490377 crio[718]: time="2023-06-26 21:06:46.645000629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f,PodSandboxId:80814ed400554f6c5b7e1841b2cfbc08505c3803222c8567da1d23bbcc6ccb2a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812825409043185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c17bf508-5125-4aa3-b48f-3ec6700ef03b,},Annotations:map[string]string{io.kubernetes.container.hash: a3700da9,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796,PodSandboxId:fc0f3f92592360e15eae13cff8501e4d7323272330a2ca28a712e83bfbd90b4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1687812824885167913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-k6lww,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b447152e-e5ad-4a16-a2fa-e1283dd98e1b,},Annotations:map[string]string{io.kubernetes.container.hash: 33d97290,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41,PodSandboxId:f74c1c90d9a549a6594bed35ce6ad1d5d3e7f41488c03a64516c7ce1f2c2f246,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1687812824565680340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7hz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 265fb
314-5fe1-4cc2-bc03-79ec432d1a46,},Annotations:map[string]string{io.kubernetes.container.hash: 68d69de3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598,PodSandboxId:b87c3356304de48a414b4183b4247071e35a2d0a5737b06f2a7aa7947d7756ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1687812798386109622,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d16f4e4c3d338ac15a9bae60bef2daa,},Annotations:map[s
tring]string{io.kubernetes.container.hash: fd37bd74,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae,PodSandboxId:92ce758b11fda23f5c677d139551912682ce612b25814b44416b5eef5ea661c0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1687812797512587893,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da,PodSandboxId:83db5f78d9adb614bebee733faf06bae6055948fd6d9aaceec688f8186289d6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1687812797077470259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365,PodSandboxId:7c09e40e4201dca1bbeadaf2a2f42991c13b3c837a9b0af472b16f6d5e33ac31,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1687812796978499334,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-490377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e703b2994e5bd1a9d98777f091e32ff6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: e363e056,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f2a4f182-e1e0-4e07-bd2b-56aa2ba2b820 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	e4c63b2286876       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   80814ed400554
	9211a896843b4       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   fc0f3f9259236
	974041d011ecf       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   13 minutes ago      Running             kube-proxy                0                   f74c1c90d9a54
	909b122decd75       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   b87c3356304de
	eee0db517063a       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   92ce758b11fda
	d5bf95816703a       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   83db5f78d9adb
	59fe9451027f9       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            0                   7c09e40e4201d
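	The table above is CRI-O's own view of the node, i.e. the same data returned by the ListContainers calls in the debug log. A minimal sketch of reproducing it by hand, assuming crictl is available inside the minikube VM and CRI-O listens on the socket named in the node's cri-socket annotation:
	
	  minikube ssh -p old-k8s-version-490377
	  # ~ ListContainers: full container list, all states
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  # ~ ContainerStatus for the storage-provisioner container listed above
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect e4c63b2286876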
	
	* 
	* ==> coredns [9211a896843b44ba404d728a171ee027b882c68c042dc58baa87e623d4b96796] <==
	* .:53
	2023-06-26T20:53:45.144Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-06-26T20:53:45.144Z [INFO] CoreDNS-1.6.2
	2023-06-26T20:53:45.144Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-06-26T20:53:45.159Z [INFO] 127.0.0.1:40572 - 13622 "HINFO IN 2354216843956826877.8527488041721077620. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014357461s
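	The single NXDOMAIN entry above is CoreDNS's loop-detection self-query at startup, not a failure: NXDOMAIN for the random name means no forwarding loop was found. The same log stream can be pulled directly (pod name taken from the container listing above):
	
	  kubectl --context old-k8s-version-490377 -n kube-system logs coredns-5644d7b6d9-k6lww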
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-490377
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-490377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=old-k8s-version-490377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_53_27_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:53:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 21:06:22 +0000   Mon, 26 Jun 2023 20:53:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 21:06:22 +0000   Mon, 26 Jun 2023 20:53:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 21:06:22 +0000   Mon, 26 Jun 2023 20:53:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 21:06:22 +0000   Mon, 26 Jun 2023 20:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.111
	  Hostname:    old-k8s-version-490377
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 ea232fb4ab5748478f4675b503f2e984
	 System UUID:                ea232fb4-ab57-4847-8f46-75b503f2e984
	 Boot ID:                    03b59918-dcfa-4a1b-ad64-21a28bdb7886
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-k6lww                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-490377                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-apiserver-old-k8s-version-490377             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-490377    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-m7hz7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-490377             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-bvbnj                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  Starting                 13m                kubelet, old-k8s-version-490377     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x9 over 13m)  kubelet, old-k8s-version-490377     Node old-k8s-version-490377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet, old-k8s-version-490377     Node old-k8s-version-490377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-490377     Node old-k8s-version-490377 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet, old-k8s-version-490377     Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-490377  Starting kube-proxy.
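	This node description can be regenerated on demand while the cluster is up:
	
	  kubectl --context old-k8s-version-490377 describe node old-k8s-version-490377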
	
	* 
	* ==> dmesg <==
	* [Jun26 20:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.081220] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.643149] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.437214] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140249] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.487915] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.152043] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.116286] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.151754] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.112634] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.236084] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[ +19.298071] systemd-fstab-generator[1040]: Ignoring "noauto" for root device
	[  +0.419702] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jun26 20:48] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.781800] kauditd_printk_skb: 2 callbacks suppressed
	[Jun26 20:53] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.663835] systemd-fstab-generator[3217]: Ignoring "noauto" for root device
	[ +40.461505] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [909b122decd75e095c00b6598f66ce42c17264794aba37b7b3aff6e5741c2598] <==
	* 2023-06-26 20:53:18.529408 I | raft: d9925a5c077e2b1a became follower at term 0
	2023-06-26 20:53:18.529432 I | raft: newRaft d9925a5c077e2b1a [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-06-26 20:53:18.529447 I | raft: d9925a5c077e2b1a became follower at term 1
	2023-06-26 20:53:18.541213 W | auth: simple token is not cryptographically signed
	2023-06-26 20:53:18.545061 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-06-26 20:53:18.546217 I | etcdserver: d9925a5c077e2b1a as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-06-26 20:53:18.546534 I | etcdserver/membership: added member d9925a5c077e2b1a [https://192.168.72.111:2380] to cluster 5b15f244ed8f8770
	2023-06-26 20:53:18.548437 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-06-26 20:53:18.548840 I | embed: listening for metrics on http://192.168.72.111:2381
	2023-06-26 20:53:18.549174 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-06-26 20:53:19.029810 I | raft: d9925a5c077e2b1a is starting a new election at term 1
	2023-06-26 20:53:19.030004 I | raft: d9925a5c077e2b1a became candidate at term 2
	2023-06-26 20:53:19.030037 I | raft: d9925a5c077e2b1a received MsgVoteResp from d9925a5c077e2b1a at term 2
	2023-06-26 20:53:19.030072 I | raft: d9925a5c077e2b1a became leader at term 2
	2023-06-26 20:53:19.030089 I | raft: raft.node: d9925a5c077e2b1a elected leader d9925a5c077e2b1a at term 2
	2023-06-26 20:53:19.030324 I | etcdserver: setting up the initial cluster version to 3.3
	2023-06-26 20:53:19.031792 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-06-26 20:53:19.031828 I | etcdserver/api: enabled capabilities for version 3.3
	2023-06-26 20:53:19.031853 I | etcdserver: published {Name:old-k8s-version-490377 ClientURLs:[https://192.168.72.111:2379]} to cluster 5b15f244ed8f8770
	2023-06-26 20:53:19.031859 I | embed: ready to serve client requests
	2023-06-26 20:53:19.032945 I | embed: ready to serve client requests
	2023-06-26 20:53:19.033201 I | embed: serving client requests on 127.0.0.1:2379
	2023-06-26 20:53:19.034184 I | embed: serving client requests on 192.168.72.111:2379
	2023-06-26 21:03:19.072777 I | mvcc: store.index: compact 680
	2023-06-26 21:03:19.075082 I | mvcc: finished scheduled compaction at 680 (took 1.47525ms)
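	The compaction entries above are routine mvcc housekeeping. A hedged health probe, run from inside the node, reusing the certificate paths etcd printed in its ClientTLS line (this assumes the server certificate also carries client-auth usage, as kubeadm-issued etcd certs do):
	
	  sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health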
	
	* 
	* ==> kernel <==
	*  21:06:46 up 19 min,  0 users,  load average: 0.13, 0.20, 0.20
	Linux old-k8s-version-490377 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [59fe9451027f9090f9a1ccf165202781515bfba29a84a61587e53981bceb9365] <==
	* I0626 20:59:23.205643       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 20:59:23.205799       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 20:59:23.205837       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 20:59:23.205848       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:01:23.206477       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 21:01:23.206594       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 21:01:23.206652       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:01:23.206660       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:03:23.208071       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 21:03:23.208238       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 21:03:23.208307       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:03:23.208315       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:04:23.208637       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 21:04:23.208736       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 21:04:23.208785       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:04:23.208792       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:06:23.209321       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0626 21:06:23.209805       1 handler_proxy.go:99] no RequestInfo found in the context
	E0626 21:06:23.209995       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:06:23.210062       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
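	The repeating 503s above all point at one thing: the aggregated v1beta1.metrics.k8s.io APIService (served by metrics-server) never became available, so the apiserver keeps requeueing its OpenAPI fetch for that group. Two quick checks (the pod label is an assumption taken from the upstream metrics-server manifests):
	
	  kubectl --context old-k8s-version-490377 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context old-k8s-version-490377 -n kube-system get pods -l k8s-app=metrics-server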
	
	* 
	* ==> kube-controller-manager [d5bf95816703aeea82f3e0fdf7021f0f885ca32fa66fd148c74f6148d548f3da] <==
	* E0626 21:00:16.056614       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:00:39.794580       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:00:46.309259       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:01:11.796340       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:01:16.562748       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:01:43.799079       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:01:46.815243       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:02:15.801644       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:02:17.067359       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0626 21:02:47.319986       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:02:47.804768       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:03:17.572220       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:03:19.806764       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:03:47.825369       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:03:51.809097       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:04:18.077171       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:04:23.811165       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:04:48.329317       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:04:55.813384       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:05:18.581494       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:05:27.815534       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:05:48.834315       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:05:59.817782       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0626 21:06:19.086512       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0626 21:06:31.819993       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
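	
	These are the same outage seen from the controller manager: both the resource quota controller and the garbage collector refresh the full API surface and keep tripping over the dead metrics.k8s.io/v1beta1 group. If the metrics backend were known to be gone for good, deleting the stale registration is a common way to unblock discovery (a sketch of a manual fix, not something this harness does):
	  kubectl --context old-k8s-version-490377 delete apiservice v1beta1.metrics.k8s.io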
	
	* 
	* ==> kube-proxy [974041d011ecf30ed0c693662c28fa10ece2f4e4cd674b2bb6c935464c63bb41] <==
	* W0626 20:53:45.241750       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0626 20:53:45.262126       1 node.go:135] Successfully retrieved node IP: 192.168.72.111
	I0626 20:53:45.262183       1 server_others.go:149] Using iptables Proxier.
	I0626 20:53:45.263235       1 server.go:529] Version: v1.16.0
	I0626 20:53:45.264926       1 config.go:131] Starting endpoints config controller
	I0626 20:53:45.264975       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0626 20:53:45.265290       1 config.go:313] Starting service config controller
	I0626 20:53:45.265333       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0626 20:53:45.365289       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0626 20:53:45.365801       1 shared_informer.go:204] Caches are synced for service config 
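	
	The proxy-mode warning at startup is benign: an empty --proxy-mode falls back to the iptables proxier, as the "Using iptables Proxier" line above confirms. On a kubeadm-provisioned node such as this one, the effective mode can be read back from the kube-proxy ConfigMap (name assumes the kubeadm default):
	  kubectl --context old-k8s-version-490377 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'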
	
	* 
	* ==> kube-scheduler [eee0db517063aaa9716078c3b1ba85128c04af967f8a412e67ef928d7118c3ae] <==
	* I0626 20:53:22.240630       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0626 20:53:22.290505       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 20:53:22.290617       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:53:22.290660       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 20:53:22.290696       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 20:53:22.290720       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:53:22.290747       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 20:53:22.291484       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 20:53:22.291517       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:53:22.291543       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:53:22.292345       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:53:22.295187       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 20:53:23.292183       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 20:53:23.293339       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:53:23.295005       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 20:53:23.301821       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:53:23.302046       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 20:53:23.303014       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 20:53:23.303162       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:53:23.304587       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:53:23.304658       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 20:53:23.305283       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:53:23.306174       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 20:53:42.346521       1 factory.go:585] pod is already present in the activeQ
	E0626 20:53:42.392230       1 factory.go:585] pod is already present in the activeQ
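	
	The "forbidden" burst at 20:53:22-23 is the scheduler's informers listing resources before its RBAC bindings had been reconciled; the errors stop once system:kube-scheduler is authorized, and the later "already present in the activeQ" messages are harmless duplicate enqueues during initial sync. A quick way to confirm the binding exists (standard bootstrap-RBAC object name) is:
	  kubectl --context old-k8s-version-490377 get clusterrolebinding system:kube-scheduler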
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 20:47:27 UTC, ends at Mon 2023-06-26 21:06:47 UTC. --
	Jun 26 21:02:16 old-k8s-version-490377 kubelet[3235]: E0626 21:02:16.994654    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:02:28 old-k8s-version-490377 kubelet[3235]: E0626 21:02:28.995010    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:02:43 old-k8s-version-490377 kubelet[3235]: E0626 21:02:43.994984    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:02:54 old-k8s-version-490377 kubelet[3235]: E0626 21:02:54.994717    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:03:08 old-k8s-version-490377 kubelet[3235]: E0626 21:03:08.994789    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:03:16 old-k8s-version-490377 kubelet[3235]: E0626 21:03:16.073689    3235 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jun 26 21:03:19 old-k8s-version-490377 kubelet[3235]: E0626 21:03:19.995285    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:03:32 old-k8s-version-490377 kubelet[3235]: E0626 21:03:32.994604    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:03:44 old-k8s-version-490377 kubelet[3235]: E0626 21:03:44.994867    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:03:57 old-k8s-version-490377 kubelet[3235]: E0626 21:03:57.994774    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:04:08 old-k8s-version-490377 kubelet[3235]: E0626 21:04:08.995058    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:04:23 old-k8s-version-490377 kubelet[3235]: E0626 21:04:23.994589    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:04:39 old-k8s-version-490377 kubelet[3235]: E0626 21:04:39.015688    3235 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 21:04:39 old-k8s-version-490377 kubelet[3235]: E0626 21:04:39.015760    3235 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 21:04:39 old-k8s-version-490377 kubelet[3235]: E0626 21:04:39.015810    3235 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 21:04:39 old-k8s-version-490377 kubelet[3235]: E0626 21:04:39.015836    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jun 26 21:04:53 old-k8s-version-490377 kubelet[3235]: E0626 21:04:53.996842    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:05:07 old-k8s-version-490377 kubelet[3235]: E0626 21:05:07.994945    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:05:18 old-k8s-version-490377 kubelet[3235]: E0626 21:05:18.995445    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:05:29 old-k8s-version-490377 kubelet[3235]: E0626 21:05:29.994602    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:05:40 old-k8s-version-490377 kubelet[3235]: E0626 21:05:40.995113    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:05:55 old-k8s-version-490377 kubelet[3235]: E0626 21:05:55.995688    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:06:08 old-k8s-version-490377 kubelet[3235]: E0626 21:06:08.994681    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:06:21 old-k8s-version-490377 kubelet[3235]: E0626 21:06:21.995180    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jun 26 21:06:34 old-k8s-version-490377 kubelet[3235]: E0626 21:06:34.995019    3235 pod_workers.go:191] Error syncing pod a51799c8-5cb6-42eb-85f0-508d0303445f ("metrics-server-74d5856cc6-bvbnj_kube-system(a51799c8-5cb6-42eb-85f0-508d0303445f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
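	
	The pull failures above are expected for this suite: per the Audit table later in this report, metrics-server is deliberately re-pointed at the unresolvable registry fake.domain when the addon is enabled, along the lines of (profile name left as a placeholder):
	  minikube addons enable metrics-server -p <profile> --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	so the kubelet cycles between ErrImagePull (the DNS lookup fails) and ImagePullBackOff until the pod or profile is deleted.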
	
	* 
	* ==> storage-provisioner [e4c63b2286876f7f5a8323fd15e8be795f1ddd9f9f537c7275343e45a03e4a9f] <==
	* I0626 20:53:45.674862       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 20:53:45.698296       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 20:53:45.698366       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 20:53:45.720362       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 20:53:45.724520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-490377_56252732-3b71-44ba-b8a6-626850ffffd7!
	I0626 20:53:45.725580       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1489b44b-117b-4ea6-bf06-8c5fb249f56c", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-490377_56252732-3b71-44ba-b8a6-626850ffffd7 became leader
	I0626 20:53:45.826824       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-490377_56252732-3b71-44ba-b8a6-626850ffffd7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490377 -n old-k8s-version-490377
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-490377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-bvbnj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-490377 describe pod metrics-server-74d5856cc6-bvbnj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-490377 describe pod metrics-server-74d5856cc6-bvbnj: exit status 1 (65.908564ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-bvbnj" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-490377 describe pod metrics-server-74d5856cc6-bvbnj: exit status 1
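The NotFound here is most likely a namespace mismatch rather than a vanished pod: the earlier listing used -A and found metrics-server-74d5856cc6-bvbnj in kube-system, while the describe ran without -n and so queried the default namespace. The equivalent namespaced lookup would be:
	kubectl --context old-k8s-version-490377 -n kube-system describe pod metrics-server-74d5856cc6-bvbnj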
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (175.41s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (210.29s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-934450 -n no-preload-934450
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-06-26 21:09:19.18712951 +0000 UTC m=+5623.687157340
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-934450 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-934450 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.611µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-934450 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
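Note the sub-microsecond failure (1.611µs) above: the 9m wait consumed the test's context deadline, so the follow-up kubectl call was cancelled before it could reach the apiserver, which is why the deployment info is empty rather than wrong. Outside the expired test context, the same state could be inspected with:
	kubectl --context no-preload-934450 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-934450 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper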
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934450 -n no-preload-934450
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-934450 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-934450 logs -n 25: (1.10696683s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490377             | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-299839            | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-473235  | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC | 26 Jun 23 20:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC |                     |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934450                  | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-299839                 | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-473235       | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:52 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 21:06 UTC | 26 Jun 23 21:06 UTC |
	| start   | -p newest-cni-421460 --memory=2200 --alsologtostderr   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:06 UTC | 26 Jun 23 21:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-421460             | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:07 UTC | 26 Jun 23 21:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:07 UTC | 26 Jun 23 21:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-421460                  | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-421460 --memory=2200 --alsologtostderr   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-421460 sudo                              | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	| delete  | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	| start   | -p auto-606105 --memory=3072                           | auto-606105                  | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 21:09:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 21:09:02.986722   53649 out.go:296] Setting OutFile to fd 1 ...
	I0626 21:09:02.986834   53649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 21:09:02.986845   53649 out.go:309] Setting ErrFile to fd 2...
	I0626 21:09:02.986852   53649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 21:09:02.986974   53649 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 21:09:02.987563   53649 out.go:303] Setting JSON to false
	I0626 21:09:02.988514   53649 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6690,"bootTime":1687807053,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 21:09:02.988573   53649 start.go:137] virtualization: kvm guest
	I0626 21:09:02.991005   53649 out.go:177] * [auto-606105] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 21:09:02.992698   53649 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 21:09:02.994179   53649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 21:09:02.992741   53649 notify.go:220] Checking for updates...
	I0626 21:09:02.995838   53649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 21:09:02.997269   53649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 21:09:02.998574   53649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 21:09:02.999987   53649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 21:09:03.001779   53649 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:03.001870   53649 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:03.001952   53649 config.go:182] Loaded profile config "no-preload-934450": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:03.002032   53649 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 21:09:03.038454   53649 out.go:177] * Using the kvm2 driver based on user configuration
	I0626 21:09:03.039770   53649 start.go:297] selected driver: kvm2
	I0626 21:09:03.039784   53649 start.go:954] validating driver "kvm2" against <nil>
	I0626 21:09:03.039794   53649 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 21:09:03.040397   53649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 21:09:03.040486   53649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 21:09:03.056762   53649 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 21:09:03.056807   53649 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 21:09:03.057045   53649 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 21:09:03.057073   53649 cni.go:84] Creating CNI manager for ""
	I0626 21:09:03.057083   53649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 21:09:03.057091   53649 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0626 21:09:03.057108   53649 start_flags.go:319] config:
	{Name:auto-606105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-606105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 21:09:03.057267   53649 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 21:09:03.059171   53649 out.go:177] * Starting control plane node auto-606105 in cluster auto-606105
	I0626 21:09:03.060567   53649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 21:09:03.060601   53649 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 21:09:03.060617   53649 cache.go:57] Caching tarball of preloaded images
	I0626 21:09:03.060715   53649 preload.go:174] Found /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 21:09:03.060731   53649 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 21:09:03.060829   53649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/config.json ...
	I0626 21:09:03.060846   53649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/auto-606105/config.json: {Name:mka65ee4e37ed0449cc7ce0fd9daca0b344c2235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 21:09:03.060985   53649 start.go:365] acquiring machines lock for auto-606105: {Name:mk642eecf0515daf16e2fdae275a6737c9b4f437 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0626 21:09:03.061016   53649 start.go:369] acquired machines lock for "auto-606105" in 16.683µs
	I0626 21:09:03.061037   53649 start.go:93] Provisioning new machine with config: &{Name:auto-606105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-606105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 21:09:03.061108   53649 start.go:125] createHost starting for "" (driver="kvm2")
	I0626 21:09:03.063814   53649 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0626 21:09:03.063943   53649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 21:09:03.063990   53649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 21:09:03.078180   53649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40705
	I0626 21:09:03.078612   53649 main.go:141] libmachine: () Calling .GetVersion
	I0626 21:09:03.079191   53649 main.go:141] libmachine: Using API Version  1
	I0626 21:09:03.079213   53649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 21:09:03.079539   53649 main.go:141] libmachine: () Calling .GetMachineName
	I0626 21:09:03.079719   53649 main.go:141] libmachine: (auto-606105) Calling .GetMachineName
	I0626 21:09:03.079860   53649 main.go:141] libmachine: (auto-606105) Calling .DriverName
	I0626 21:09:03.080029   53649 start.go:159] libmachine.API.Create for "auto-606105" (driver="kvm2")
	I0626 21:09:03.080062   53649 client.go:168] LocalClient.Create starting
	I0626 21:09:03.080099   53649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/ca.pem
	I0626 21:09:03.080143   53649 main.go:141] libmachine: Decoding PEM data...
	I0626 21:09:03.080159   53649 main.go:141] libmachine: Parsing certificate...
	I0626 21:09:03.080207   53649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-7242/.minikube/certs/cert.pem
	I0626 21:09:03.080225   53649 main.go:141] libmachine: Decoding PEM data...
	I0626 21:09:03.080236   53649 main.go:141] libmachine: Parsing certificate...
	I0626 21:09:03.080252   53649 main.go:141] libmachine: Running pre-create checks...
	I0626 21:09:03.080262   53649 main.go:141] libmachine: (auto-606105) Calling .PreCreateCheck
	I0626 21:09:03.080564   53649 main.go:141] libmachine: (auto-606105) Calling .GetConfigRaw
	I0626 21:09:03.080965   53649 main.go:141] libmachine: Creating machine...
	I0626 21:09:03.080979   53649 main.go:141] libmachine: (auto-606105) Calling .Create
	I0626 21:09:03.081119   53649 main.go:141] libmachine: (auto-606105) Creating KVM machine...
	I0626 21:09:03.082425   53649 main.go:141] libmachine: (auto-606105) DBG | found existing default KVM network
	I0626 21:09:03.083649   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:03.083514   53672 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:af:bd} reservation:<nil>}
	I0626 21:09:03.084562   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:03.084458   53672 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d5:10:e0} reservation:<nil>}
	I0626 21:09:03.085495   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:03.085435   53672 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:a0:95:29} reservation:<nil>}
	I0626 21:09:03.086632   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:03.086558   53672 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000303270}
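	
	The DBG lines above show libmachine walking the private 192.168.x.0/24 ranges, skipping subnets already claimed by existing libvirt networks, and settling on the free 192.168.72.0/24. The same picture is visible from the host with libvirt's own tooling (assuming virsh is available on the agent):
	  virsh net-list --all
	  virsh net-dumpxml mk-auto-606105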
	I0626 21:09:03.091518   53649 main.go:141] libmachine: (auto-606105) DBG | trying to create private KVM network mk-auto-606105 192.168.72.0/24...
	I0626 21:09:03.164914   53649 main.go:141] libmachine: (auto-606105) Setting up store path in /home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105 ...
	I0626 21:09:03.164965   53649 main.go:141] libmachine: (auto-606105) Building disk image from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso
	I0626 21:09:03.164978   53649 main.go:141] libmachine: (auto-606105) DBG | private KVM network mk-auto-606105 192.168.72.0/24 created
	I0626 21:09:03.164997   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:03.164869   53672 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 21:09:03.165027   53649 main.go:141] libmachine: (auto-606105) Downloading /home/jenkins/minikube-integration/16761-7242/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso...
	I0626 21:09:03.358985   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:03.358868   53672 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/id_rsa...
	I0626 21:09:03.618230   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:03.618117   53672 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/auto-606105.rawdisk...
	I0626 21:09:03.618259   53649 main.go:141] libmachine: (auto-606105) DBG | Writing magic tar header
	I0626 21:09:03.618269   53649 main.go:141] libmachine: (auto-606105) DBG | Writing SSH key tar header
	I0626 21:09:03.618280   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:03.618218   53672 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105 ...
	I0626 21:09:03.618297   53649 main.go:141] libmachine: (auto-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105
	I0626 21:09:03.618347   53649 main.go:141] libmachine: (auto-606105) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105 (perms=drwx------)
	I0626 21:09:03.618367   53649 main.go:141] libmachine: (auto-606105) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube/machines (perms=drwxr-xr-x)
	I0626 21:09:03.618380   53649 main.go:141] libmachine: (auto-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube/machines
	I0626 21:09:03.618412   53649 main.go:141] libmachine: (auto-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 21:09:03.618423   53649 main.go:141] libmachine: (auto-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16761-7242
	I0626 21:09:03.618439   53649 main.go:141] libmachine: (auto-606105) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0626 21:09:03.618455   53649 main.go:141] libmachine: (auto-606105) DBG | Checking permissions on dir: /home/jenkins
	I0626 21:09:03.618471   53649 main.go:141] libmachine: (auto-606105) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242/.minikube (perms=drwxr-xr-x)
	I0626 21:09:03.618483   53649 main.go:141] libmachine: (auto-606105) Setting executable bit set on /home/jenkins/minikube-integration/16761-7242 (perms=drwxrwxr-x)
	I0626 21:09:03.618492   53649 main.go:141] libmachine: (auto-606105) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0626 21:09:03.618499   53649 main.go:141] libmachine: (auto-606105) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0626 21:09:03.618513   53649 main.go:141] libmachine: (auto-606105) Creating domain...
	I0626 21:09:03.618527   53649 main.go:141] libmachine: (auto-606105) DBG | Checking permissions on dir: /home
	I0626 21:09:03.618538   53649 main.go:141] libmachine: (auto-606105) DBG | Skipping /home - not owner
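
The permission walk above climbs from the machine directory toward the filesystem root, setting the executable bit on directories the jenkins user owns and skipping /home because it is owned by someone else. A minimal sketch of that pattern, assumed behavior rather than libmachine's implementation:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "syscall"
    )

    // Climb from a machine directory toward the root, adding the owner
    // execute bit to each directory we own and skipping the rest, in the
    // spirit of the "Checking permissions" / "Skipping /home - not owner"
    // lines above.
    func ensureTraversable(path string) error {
        uid := uint32(os.Getuid())
        for dir := path; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
            info, err := os.Stat(dir)
            if err != nil {
                return err
            }
            st, ok := info.Sys().(*syscall.Stat_t)
            if !ok || st.Uid != uid {
                fmt.Printf("Skipping %s - not owner\n", dir)
                continue
            }
            if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := ensureTraversable(os.ExpandEnv("$HOME/.minikube/machines/auto-606105")); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
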
	I0626 21:09:03.619558   53649 main.go:141] libmachine: (auto-606105) define libvirt domain using xml: 
	I0626 21:09:03.619588   53649 main.go:141] libmachine: (auto-606105) <domain type='kvm'>
	I0626 21:09:03.619600   53649 main.go:141] libmachine: (auto-606105)   <name>auto-606105</name>
	I0626 21:09:03.619616   53649 main.go:141] libmachine: (auto-606105)   <memory unit='MiB'>3072</memory>
	I0626 21:09:03.619634   53649 main.go:141] libmachine: (auto-606105)   <vcpu>2</vcpu>
	I0626 21:09:03.619646   53649 main.go:141] libmachine: (auto-606105)   <features>
	I0626 21:09:03.619652   53649 main.go:141] libmachine: (auto-606105)     <acpi/>
	I0626 21:09:03.619657   53649 main.go:141] libmachine: (auto-606105)     <apic/>
	I0626 21:09:03.619662   53649 main.go:141] libmachine: (auto-606105)     <pae/>
	I0626 21:09:03.619667   53649 main.go:141] libmachine: (auto-606105)     
	I0626 21:09:03.619676   53649 main.go:141] libmachine: (auto-606105)   </features>
	I0626 21:09:03.619681   53649 main.go:141] libmachine: (auto-606105)   <cpu mode='host-passthrough'>
	I0626 21:09:03.619694   53649 main.go:141] libmachine: (auto-606105)   
	I0626 21:09:03.619703   53649 main.go:141] libmachine: (auto-606105)   </cpu>
	I0626 21:09:03.619749   53649 main.go:141] libmachine: (auto-606105)   <os>
	I0626 21:09:03.619777   53649 main.go:141] libmachine: (auto-606105)     <type>hvm</type>
	I0626 21:09:03.619793   53649 main.go:141] libmachine: (auto-606105)     <boot dev='cdrom'/>
	I0626 21:09:03.619806   53649 main.go:141] libmachine: (auto-606105)     <boot dev='hd'/>
	I0626 21:09:03.619821   53649 main.go:141] libmachine: (auto-606105)     <bootmenu enable='no'/>
	I0626 21:09:03.619834   53649 main.go:141] libmachine: (auto-606105)   </os>
	I0626 21:09:03.619864   53649 main.go:141] libmachine: (auto-606105)   <devices>
	I0626 21:09:03.619882   53649 main.go:141] libmachine: (auto-606105)     <disk type='file' device='cdrom'>
	I0626 21:09:03.619898   53649 main.go:141] libmachine: (auto-606105)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/boot2docker.iso'/>
	I0626 21:09:03.619918   53649 main.go:141] libmachine: (auto-606105)       <target dev='hdc' bus='scsi'/>
	I0626 21:09:03.619930   53649 main.go:141] libmachine: (auto-606105)       <readonly/>
	I0626 21:09:03.619942   53649 main.go:141] libmachine: (auto-606105)     </disk>
	I0626 21:09:03.619966   53649 main.go:141] libmachine: (auto-606105)     <disk type='file' device='disk'>
	I0626 21:09:03.619988   53649 main.go:141] libmachine: (auto-606105)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0626 21:09:03.620006   53649 main.go:141] libmachine: (auto-606105)       <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/auto-606105.rawdisk'/>
	I0626 21:09:03.620018   53649 main.go:141] libmachine: (auto-606105)       <target dev='hda' bus='virtio'/>
	I0626 21:09:03.620032   53649 main.go:141] libmachine: (auto-606105)     </disk>
	I0626 21:09:03.620044   53649 main.go:141] libmachine: (auto-606105)     <interface type='network'>
	I0626 21:09:03.620076   53649 main.go:141] libmachine: (auto-606105)       <source network='mk-auto-606105'/>
	I0626 21:09:03.620100   53649 main.go:141] libmachine: (auto-606105)       <model type='virtio'/>
	I0626 21:09:03.620113   53649 main.go:141] libmachine: (auto-606105)     </interface>
	I0626 21:09:03.620124   53649 main.go:141] libmachine: (auto-606105)     <interface type='network'>
	I0626 21:09:03.620145   53649 main.go:141] libmachine: (auto-606105)       <source network='default'/>
	I0626 21:09:03.620160   53649 main.go:141] libmachine: (auto-606105)       <model type='virtio'/>
	I0626 21:09:03.620176   53649 main.go:141] libmachine: (auto-606105)     </interface>
	I0626 21:09:03.620191   53649 main.go:141] libmachine: (auto-606105)     <serial type='pty'>
	I0626 21:09:03.620209   53649 main.go:141] libmachine: (auto-606105)       <target port='0'/>
	I0626 21:09:03.620220   53649 main.go:141] libmachine: (auto-606105)     </serial>
	I0626 21:09:03.620231   53649 main.go:141] libmachine: (auto-606105)     <console type='pty'>
	I0626 21:09:03.620243   53649 main.go:141] libmachine: (auto-606105)       <target type='serial' port='0'/>
	I0626 21:09:03.620277   53649 main.go:141] libmachine: (auto-606105)     </console>
	I0626 21:09:03.620296   53649 main.go:141] libmachine: (auto-606105)     <rng model='virtio'>
	I0626 21:09:03.620311   53649 main.go:141] libmachine: (auto-606105)       <backend model='random'>/dev/random</backend>
	I0626 21:09:03.620321   53649 main.go:141] libmachine: (auto-606105)     </rng>
	I0626 21:09:03.620330   53649 main.go:141] libmachine: (auto-606105)     
	I0626 21:09:03.620337   53649 main.go:141] libmachine: (auto-606105)     
	I0626 21:09:03.620343   53649 main.go:141] libmachine: (auto-606105)   </devices>
	I0626 21:09:03.620349   53649 main.go:141] libmachine: (auto-606105) </domain>
	I0626 21:09:03.620357   53649 main.go:141] libmachine: (auto-606105) 
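
For readability, here is the domain definition spelled out across the log lines above, reassembled with the log prefixes stripped:

    <domain type='kvm'>
      <name>auto-606105</name>
      <memory unit='MiB'>3072</memory>
      <vcpu>2</vcpu>
      <features>
        <acpi/>
        <apic/>
        <pae/>
      </features>
      <cpu mode='host-passthrough'>
      </cpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/boot2docker.iso'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads' />
          <source file='/home/jenkins/minikube-integration/16761-7242/.minikube/machines/auto-606105/auto-606105.rawdisk'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='mk-auto-606105'/>
          <model type='virtio'/>
        </interface>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <rng model='virtio'>
          <backend model='random'>/dev/random</backend>
        </rng>
      </devices>
    </domain>

The two <interface> elements account for the two MAC addresses the driver reports next: one attached to the default libvirt network and one to the freshly created mk-auto-606105 network.
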
	I0626 21:09:03.624440   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:06:3e:e9 in network default
	I0626 21:09:03.624960   53649 main.go:141] libmachine: (auto-606105) Ensuring networks are active...
	I0626 21:09:03.624986   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:03.625650   53649 main.go:141] libmachine: (auto-606105) Ensuring network default is active
	I0626 21:09:03.625968   53649 main.go:141] libmachine: (auto-606105) Ensuring network mk-auto-606105 is active
	I0626 21:09:03.626543   53649 main.go:141] libmachine: (auto-606105) Getting domain xml...
	I0626 21:09:03.627234   53649 main.go:141] libmachine: (auto-606105) Creating domain...
	I0626 21:09:04.882671   53649 main.go:141] libmachine: (auto-606105) Waiting to get IP...
	I0626 21:09:04.883577   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:04.884050   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:04.884078   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:04.884016   53672 retry.go:31] will retry after 252.030241ms: waiting for machine to come up
	I0626 21:09:05.137367   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:05.137901   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:05.137929   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:05.137860   53672 retry.go:31] will retry after 234.494841ms: waiting for machine to come up
	I0626 21:09:05.374427   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:05.374936   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:05.374964   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:05.374894   53672 retry.go:31] will retry after 379.887597ms: waiting for machine to come up
	I0626 21:09:05.756432   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:05.756841   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:05.756869   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:05.756815   53672 retry.go:31] will retry after 574.154872ms: waiting for machine to come up
	I0626 21:09:06.332608   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:06.333058   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:06.333090   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:06.332994   53672 retry.go:31] will retry after 589.613924ms: waiting for machine to come up
	I0626 21:09:06.923721   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:06.924182   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:06.924217   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:06.924155   53672 retry.go:31] will retry after 694.066596ms: waiting for machine to come up
	I0626 21:09:07.619974   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:07.620508   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:07.620537   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:07.620465   53672 retry.go:31] will retry after 1.007383258s: waiting for machine to come up
	I0626 21:09:08.628894   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:08.629346   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:08.629374   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:08.629308   53672 retry.go:31] will retry after 1.042499988s: waiting for machine to come up
	I0626 21:09:09.672849   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:09.673427   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:09.673458   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:09.673362   53672 retry.go:31] will retry after 1.580166168s: waiting for machine to come up
	I0626 21:09:11.254856   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:11.255307   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:11.255340   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:11.255277   53672 retry.go:31] will retry after 1.786128427s: waiting for machine to come up
	I0626 21:09:13.042729   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:13.043206   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:13.043238   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:13.043179   53672 retry.go:31] will retry after 2.703421835s: waiting for machine to come up
	I0626 21:09:15.748764   53649 main.go:141] libmachine: (auto-606105) DBG | domain auto-606105 has defined MAC address 52:54:00:94:12:df in network mk-auto-606105
	I0626 21:09:15.749201   53649 main.go:141] libmachine: (auto-606105) DBG | unable to find current IP address of domain auto-606105 in network mk-auto-606105
	I0626 21:09:15.749231   53649 main.go:141] libmachine: (auto-606105) DBG | I0626 21:09:15.749148   53672 retry.go:31] will retry after 2.494518509s: waiting for machine to come up
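
The retry.go:31 lines show the driver polling the new domain for a DHCP lease with a growing, jittered delay, starting around 250 ms and climbing toward a few seconds. A sketch of that polling pattern under those assumptions, not minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // Poll a condition with a jittered, growing backoff, in the spirit of
    // the retry.go:31 "will retry after ..." lines above.
    func retryUntil(timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if err := fn(); err == nil {
                return nil
            }
            // Randomize to +/-50% so concurrent pollers don't stay in step.
            d := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
            time.Sleep(d)
            if backoff < 3*time.Second {
                backoff += backoff / 2 // grow ~1.5x per attempt, loosely capped
            }
        }
        return errors.New("timed out waiting for machine to come up")
    }

    func main() {
        attempts := 0
        _ = retryUntil(10*time.Second, func() error {
            attempts++
            if attempts < 5 {
                return errors.New("no IP yet")
            }
            return nil
        })
    }
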
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 20:46:25 UTC, ends at Mon 2023-06-26 21:09:19 UTC. --
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.606763539Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:1a40d28499bb23517ef8ae19a0663a18ea7cae01e72a3f1946cc812a86351a95],Size_:747809,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,RepoTags:[registry.k8s.io/kube-proxy:v1.27.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f],Size_:72711972,Uid:nil,Username:,Spec:nil,},&Image{Id:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,RepoTags:[registry.k8s.io/kube-apiserver:v1.27.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef],Size_:122063395,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&I
mage{Id:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,RepoTags:[registry.k8s.io/kube-scheduler:v1.27.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439],Size_:59808647,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,RepoTags:[registry.k8s.io/kube-controller-manager:v1.27.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133],Size_:113916809,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3],Size_:53619376,Uid:nil,Username:,Spec:nil,},&Image{Id:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,RepoTags:[registry.k8s.io/etcd:3.5.7
-0],RepoDigests:[registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb],Size_:297081108,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Username:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=c56f0d4b-d8a0-4aaf-8b75-cfc1aab5f663 name=/runtime.v1.ImageService/ListImages
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.661842653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=28637749-ad09-4e10-9f89-6c5708177892 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.662096579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=28637749-ad09-4e10-9f89-6c5708177892 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.662415955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=28637749-ad09-4e10-9f89-6c5708177892 name=/runtime.v1alpha2.RuntimeService/ListContainers
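
The entries above are ListContainers RPCs answered by CRI-O on its unix socket; the same call can be issued from Go with the cri-api client. A minimal sketch, with the caveat that the log shows the v1alpha2 service while current cri-api clients speak v1, and the socket path is assumed to be CRI-O's default:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // Issue the same ListContainers RPC that is being answered in the log
    // above, against CRI-O's default socket.
    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
        }
    }

The crictl CLI wraps the same RPC, so running "sudo crictl ps" against the same socket prints an equivalent container list.
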
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.701696686Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2f1d7e7f-d54f-48c0-8104-47bff128b5b5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.701787470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f1d7e7f-d54f-48c0-8104-47bff128b5b5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.702109454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f1d7e7f-d54f-48c0-8104-47bff128b5b5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.741525037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=160623d4-a3c3-4341-bcad-a24b28a4622f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.741618222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=160623d4-a3c3-4341-bcad-a24b28a4622f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.741770995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=160623d4-a3c3-4341-bcad-a24b28a4622f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.787510143Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5358908f-edc9-4d30-bfa8-e55cac404528 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.787627277Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5358908f-edc9-4d30-bfa8-e55cac404528 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.787957535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5358908f-edc9-4d30-bfa8-e55cac404528 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.835612290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1194712a-c73a-4288-8d14-3f51775eda96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.835695315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1194712a-c73a-4288-8d14-3f51775eda96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.835870583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1194712a-c73a-4288-8d14-3f51775eda96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.878971892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bb9a6fa6-f6c0-43cb-8b29-bbb9a0adb48d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.879123966Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bb9a6fa6-f6c0-43cb-8b29-bbb9a0adb48d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:19 no-preload-934450 crio[733]: time="2023-06-26 21:09:19.879398285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec,PodSandboxId:f586bb4316f81a671a9c44f1f9c680cc087ae01753cc90f191235171ba0befb2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1687812744936347756,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: add6b7bd-e1b5-4520-a7e6-cf999357c2be,},Annotations:map[string]string{io.kubernetes.container.hash: c7dc5e16,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b,PodSandboxId:a82180c52ff4b3e6f52198c3e95dc5fa208e7fc7f874425824de8a7ab77ac1fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1687812743958397308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jhz99,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f79864b8-d96c-4d24-b6e4-a402081ad34a,},Annotations:map[string]string{io.kubernetes.container.hash: fb8b76d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf,PodSandboxId:c73b0e0df73f10897abb266a49610994c5eeb7628c05621bb6d5fccb3dd08024,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1687812743005750428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-xm96k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac95f06b-2ed5-4979-9282-f33eaa18dc7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5f100e52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1,PodSandboxId:1504527283bce3bd4478d7b3ee904db7350b3ad625c91309bffef3f5dd4dad9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1687812719906391862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dbdf45e56ba8e0c639b338901cdf9eec,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937,PodSandboxId:fae64fcf8ca2614f4af9274c397975bfa45ad8919213472c43cda5b6e86ee007,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1687812719663166631,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bd9f50bc68
d1da435361b17ce1b1686,},Annotations:map[string]string{io.kubernetes.container.hash: 8c737dda,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b,PodSandboxId:f49b16e9b327991fdb9ad30e3f026a85dfc490889abbb14113d891ff83e2fba0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1687812719694459653,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: aaa842b6ad2ebf51b4734dce426e3d04,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2,PodSandboxId:8796f4334146dd6b3654b687e888e5f3ee78bf459e68314b8af3332281f78d02,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1687812719451282980,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-934450,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be225edd155577110a13fdd1cc35615,},An
notations:map[string]string{io.kubernetes.container.hash: 74c2e9a8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bb9a6fa6-f6c0-43cb-8b29-bbb9a0adb48d name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	cce86e4ac6d10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   f586bb4316f81
	d9a74ded05e96       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   16 minutes ago      Running             kube-proxy                0                   a82180c52ff4b
	3f594979249ec       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   c73b0e0df73f1
	4bf419c5667b7       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   17 minutes ago      Running             kube-scheduler            2                   1504527283bce
	9c97d6872e3eb       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   17 minutes ago      Running             kube-controller-manager   2                   f49b16e9b3279
	677700e637cf7       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   17 minutes ago      Running             kube-apiserver            2                   fae64fcf8ca26
	d8bd0503ff17e       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   17 minutes ago      Running             etcd                      2                   8796f4334146d
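
This table is the human-readable form of the ListContainers responses logged above: all seven containers Running, with the control-plane static pods on their second attempt and the addon pods on their first. A minimal sketch for reproducing the query by hand, assuming the crictl binary bundled in the minikube node image:

    out/minikube-linux-amd64 -p no-preload-934450 ssh "sudo crictl ps -a"

crictl talks to the same unix:///var/run/crio/crio.sock endpoint recorded in the node's cri-socket annotation, so the container IDs and attempt counts should match the rows above.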
	
	* 
	* ==> coredns [3f594979249ecd47c406fb534f1dc3c155ad4283b6c43e9c5460c6ea38b33bcf] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:48469 - 43928 "HINFO IN 3872093173642719776.143745242958422132. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013394887s
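
CoreDNS started cleanly, reloaded once after a Corefile change, and answered its own loop-detection HINFO probe with NXDOMAIN, which is the expected result. To verify in-cluster resolution end to end, a sketch (assumes the busybox:1.28 image can be pulled, and that the profile's kubeconfig context carries the profile name, as minikube normally arranges):

    kubectl --context no-preload-934450 run dnstest --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default

A successful lookup returns the kubernetes Service ClusterIP (10.96.0.1 under minikube's default service CIDR).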
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-934450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-934450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=no-preload-934450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_52_07_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:52:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-934450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 21:09:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 21:07:44 +0000   Mon, 26 Jun 2023 20:52:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 21:07:44 +0000   Mon, 26 Jun 2023 20:52:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 21:07:44 +0000   Mon, 26 Jun 2023 20:52:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 21:07:44 +0000   Mon, 26 Jun 2023 20:52:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.38
	  Hostname:    no-preload-934450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 13027f91e921404a858d73b7fe3591c7
	  System UUID:                13027f91-e921-404a-858d-73b7fe3591c7
	  Boot ID:                    97e1de77-4b2f-4df0-b11c-e0ff0e97cf17
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-xm96k                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-no-preload-934450                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-no-preload-934450             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-no-preload-934450    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-jhz99                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-no-preload-934450             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-74d5c6b9c-4dkpm               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node no-preload-934450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node no-preload-934450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node no-preload-934450 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node no-preload-934450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node no-preload-934450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node no-preload-934450 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             17m                kubelet          Node no-preload-934450 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                17m                kubelet          Node no-preload-934450 status is now: NodeReady
	  Normal  RegisteredNode           17m                node-controller  Node no-preload-934450 event: Registered Node no-preload-934450 in Controller
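
The node description is consistent with a healthy single-node control plane: Ready since 20:52:17, no taints, and attempt count 2 on the static pods, matching the two "Starting kubelet." events above. The same view can be regenerated at any time with:

    kubectl --context no-preload-934450 describe node no-preload-934450

Note that the request percentages in the resource tables are computed against the 2-CPU / ~2.1Gi allocatable figures listed earlier in this section.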
	
	* 
	* ==> dmesg <==
	* [Jun26 20:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071946] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.100875] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.344185] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143847] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.388945] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.661429] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.096753] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.139933] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.111150] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[  +0.213712] systemd-fstab-generator[718]: Ignoring "noauto" for root device
	[Jun26 20:47] systemd-fstab-generator[1248]: Ignoring "noauto" for root device
	[ +18.928703] kauditd_printk_skb: 29 callbacks suppressed
	[Jun26 20:51] systemd-fstab-generator[3853]: Ignoring "noauto" for root device
	[Jun26 20:52] systemd-fstab-generator[4184]: Ignoring "noauto" for root device
	[ +26.819780] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [d8bd0503ff17e896cadfd4a27341a5bee7e604646ce1c2561e79dad991e03ef2] <==
	* {"level":"info","ts":"2023-06-26T21:02:01.687Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":725,"took":"2.357682ms","hash":2735759071}
	{"level":"info","ts":"2023-06-26T21:02:01.687Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2735759071,"revision":725,"compact-revision":-1}
	{"level":"info","ts":"2023-06-26T21:07:01.695Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":968}
	{"level":"info","ts":"2023-06-26T21:07:01.698Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":968,"took":"2.6026ms","hash":254870629}
	{"level":"info","ts":"2023-06-26T21:07:01.698Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":254870629,"revision":968,"compact-revision":725}
	{"level":"warn","ts":"2023-06-26T21:07:26.799Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.491998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T21:07:26.799Z","caller":"traceutil/trace.go:171","msg":"trace[798313945] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1232; }","duration":"131.750251ms","start":"2023-06-26T21:07:26.667Z","end":"2023-06-26T21:07:26.799Z","steps":["trace[798313945] 'range keys from in-memory index tree'  (duration: 131.326478ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:07:27.608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.192239ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13894881355424778264 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-74d5c6b9c-4dkpm.176c50bfa98f08eb\" mod_revision:992 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-74d5c6b9c-4dkpm.176c50bfa98f08eb\" value_size:639 lease:4671509318570002454 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-74d5c6b9c-4dkpm.176c50bfa98f08eb\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-06-26T21:07:27.608Z","caller":"traceutil/trace.go:171","msg":"trace[821877315] transaction","detail":"{read_only:false; response_revision:1233; number_of_response:1; }","duration":"384.705236ms","start":"2023-06-26T21:07:27.223Z","end":"2023-06-26T21:07:27.608Z","steps":["trace[821877315] 'process raft request'  (duration: 123.427748ms)","trace[821877315] 'compare'  (duration: 260.071897ms)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T21:07:27.608Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-26T21:07:27.223Z","time spent":"384.803498ms","remote":"127.0.0.1:60236","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":733,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-74d5c6b9c-4dkpm.176c50bfa98f08eb\" mod_revision:992 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-74d5c6b9c-4dkpm.176c50bfa98f08eb\" value_size:639 lease:4671509318570002454 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-74d5c6b9c-4dkpm.176c50bfa98f08eb\" > >"}
	{"level":"warn","ts":"2023-06-26T21:07:27.863Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.435808ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13894881355424778265 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-934450\" mod_revision:1225 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-934450\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-934450\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-06-26T21:07:27.864Z","caller":"traceutil/trace.go:171","msg":"trace[652021278] linearizableReadLoop","detail":"{readStateIndex:1434; appliedIndex:1433; }","duration":"195.971514ms","start":"2023-06-26T21:07:27.668Z","end":"2023-06-26T21:07:27.864Z","steps":["trace[652021278] 'read index received'  (duration: 64.897059ms)","trace[652021278] 'applied index is now lower than readState.Index'  (duration: 131.073262ms)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T21:07:27.864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.168493ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T21:07:27.864Z","caller":"traceutil/trace.go:171","msg":"trace[847059055] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1234; }","duration":"196.254632ms","start":"2023-06-26T21:07:27.668Z","end":"2023-06-26T21:07:27.864Z","steps":["trace[847059055] 'agreement among raft nodes before linearized reading'  (duration: 196.063601ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T21:07:27.864Z","caller":"traceutil/trace.go:171","msg":"trace[2133725201] transaction","detail":"{read_only:false; response_revision:1234; number_of_response:1; }","duration":"381.2746ms","start":"2023-06-26T21:07:27.483Z","end":"2023-06-26T21:07:27.864Z","steps":["trace[2133725201] 'process raft request'  (duration: 249.720447ms)","trace[2133725201] 'compare'  (duration: 130.277625ms)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T21:07:27.864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-26T21:07:27.483Z","time spent":"381.385105ms","remote":"127.0.0.1:60282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-934450\" mod_revision:1225 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-934450\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-934450\" > >"}
	{"level":"info","ts":"2023-06-26T21:08:35.689Z","caller":"traceutil/trace.go:171","msg":"trace[529797330] transaction","detail":"{read_only:false; response_revision:1290; number_of_response:1; }","duration":"723.616949ms","start":"2023-06-26T21:08:34.965Z","end":"2023-06-26T21:08:35.689Z","steps":["trace[529797330] 'process raft request'  (duration: 723.492433ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:08:35.689Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-26T21:08:34.965Z","time spent":"723.871496ms","remote":"127.0.0.1:60254","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1287 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-06-26T21:08:35.690Z","caller":"traceutil/trace.go:171","msg":"trace[857328958] linearizableReadLoop","detail":"{readStateIndex:1503; appliedIndex:1503; }","duration":"135.725128ms","start":"2023-06-26T21:08:35.554Z","end":"2023-06-26T21:08:35.690Z","steps":["trace[857328958] 'read index received'  (duration: 135.721519ms)","trace[857328958] 'applied index is now lower than readState.Index'  (duration: 2.914µs)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T21:08:35.690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.840021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T21:08:35.690Z","caller":"traceutil/trace.go:171","msg":"trace[190658169] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1290; }","duration":"135.871886ms","start":"2023-06-26T21:08:35.554Z","end":"2023-06-26T21:08:35.690Z","steps":["trace[190658169] 'agreement among raft nodes before linearized reading'  (duration: 135.801617ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:08:36.057Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.327231ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13894881355424778609 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:40d488f97a4ac970>","response":"size:40"}
	{"level":"warn","ts":"2023-06-26T21:08:36.058Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-26T21:08:35.697Z","time spent":"360.431623ms","remote":"127.0.0.1:60232","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2023-06-26T21:08:36.336Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.782643ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13894881355424778610 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.38\" mod_revision:1282 > success:<request_put:<key:\"/registry/masterleases/192.168.50.38\" value_size:66 lease:4671509318570002800 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.38\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-06-26T21:08:36.336Z","caller":"traceutil/trace.go:171","msg":"trace[386195189] transaction","detail":"{read_only:false; response_revision:1291; number_of_response:1; }","duration":"277.28281ms","start":"2023-06-26T21:08:36.059Z","end":"2023-06-26T21:08:36.336Z","steps":["trace[386195189] 'process raft request'  (duration: 127.057705ms)","trace[386195189] 'compare'  (duration: 149.614563ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  21:09:20 up 23 min,  0 users,  load average: 0.01, 0.06, 0.13
	Linux no-preload-934450 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [677700e637cf7e97cea23d10cccf1d61d2d36c25abfb8d6f34a28f4a3b75c937] <==
	* E0626 21:07:04.807162       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:07:04.807222       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:07:04.807321       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:07:04.807380       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:07:04.808569       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:08:03.645569       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.25.93:443: connect: connection refused
	I0626 21:08:03.645644       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:08:04.807616       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:08:04.807743       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:08:04.807759       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:08:04.808965       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:08:04.809090       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:08:04.809131       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:08:35.690708       1 trace.go:219] Trace[671140079]: "Update" accept:application/json, */*,audit-id:978d6b14-61f3-4055-bd0e-5c43804049d4,client:192.168.50.38,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (26-Jun-2023 21:08:34.963) (total time: 727ms):
	Trace[671140079]: ["GuaranteedUpdate etcd3" audit-id:978d6b14-61f3-4055-bd0e-5c43804049d4,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 726ms (21:08:34.964)
	Trace[671140079]:  ---"Txn call completed" 725ms (21:08:35.690)]
	Trace[671140079]: [727.003953ms] [727.003953ms] END
	I0626 21:08:36.337913       1 trace.go:219] Trace[1545632410]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.38,type:*v1.Endpoints,resource:apiServerIPInfo (26-Jun-2023 21:08:35.695) (total time: 642ms):
	Trace[1545632410]: ---"Transaction prepared" 361ms (21:08:36.059)
	Trace[1545632410]: ---"Txn call completed" 278ms (21:08:36.337)
	Trace[1545632410]: [642.119964ms] [642.119964ms] END
	I0626 21:09:03.646274       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.25.93:443: connect: connection refused
	I0626 21:09:03.646398       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
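
Every apiserver error in this window reduces to one symptom: the v1beta1.metrics.k8s.io APIService backend is unreachable (dial tcp 10.104.25.93:443: connection refused), so OpenAPI aggregation keeps rate-limit requeueing it. That is consistent with the no-preload AddonExistsAfterStop failure in the summary. Two quick probes, as a sketch (k8s-app=metrics-server is the label minikube's addon manifests apply; adjust if the selector differs):

    kubectl --context no-preload-934450 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-934450 -n kube-system get pods -l k8s-app=metrics-server

An APIService stuck at Available=False (FailedDiscoveryCheck) alongside a not-ready metrics-server pod localizes the fault to the addon rather than the apiserver.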
	
	* 
	* ==> kube-controller-manager [9c97d6872e3eb04e05f2ab102d3cbaa9d193aa9529845225a58b9b917a8f2e3b] <==
	* W0626 21:03:19.388477       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:03:48.904460       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:03:49.397314       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:18.910204       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:19.406179       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:48.916139       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:49.418225       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:18.923199       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:05:19.429673       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:48.933150       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:05:49.440741       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:06:18.941332       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:06:19.452276       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:06:48.948731       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:06:49.467405       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:07:18.955680       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:07:19.476387       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:07:48.961762       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:07:49.485287       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:08:18.968552       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:08:19.498493       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:08:48.976943       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:08:49.510122       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:09:18.985314       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:09:19.519510       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
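
The garbage collector and resource-quota controller fail on the same stale metrics.k8s.io/v1beta1 discovery document every thirty seconds, which is expected collateral once the APIService above is unreachable. Probing aggregated discovery directly shows the same thing (a sketch):

    kubectl --context no-preload-934450 get --raw /apis/metrics.k8s.io/v1beta1

A 503 "service unavailable" here, mirroring the apiserver log, means these warnings will clear on their own once metrics-server answers again.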
	
	* 
	* ==> kube-proxy [d9a74ded05e960f8d57eb09b275ed96f6c3808e3112bac810af09e80330b010b] <==
	* I0626 20:52:24.495965       1 node.go:141] Successfully retrieved node IP: 192.168.50.38
	I0626 20:52:24.496707       1 server_others.go:110] "Detected node IP" address="192.168.50.38"
	I0626 20:52:24.496752       1 server_others.go:554] "Using iptables proxy"
	I0626 20:52:24.670180       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0626 20:52:24.670232       1 server_others.go:192] "Using iptables Proxier"
	I0626 20:52:24.670305       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 20:52:24.670787       1 server.go:658] "Version info" version="v1.27.3"
	I0626 20:52:24.670834       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 20:52:24.672712       1 config.go:188] "Starting service config controller"
	I0626 20:52:24.672754       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 20:52:24.677172       1 config.go:97] "Starting endpoint slice config controller"
	I0626 20:52:24.677243       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 20:52:24.682546       1 config.go:315] "Starting node config controller"
	I0626 20:52:24.682621       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 20:52:24.773081       1 shared_informer.go:318] Caches are synced for service config
	I0626 20:52:24.778409       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 20:52:24.783070       1 shared_informer.go:318] Caches are synced for node config
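
kube-proxy came up in iptables mode and synced all three of its configs (service, endpoint slice, node), so Service routing on the node should be functional. To inspect the NAT rules it generated, a sketch via the node's shell:

    out/minikube-linux-amd64 -p no-preload-934450 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"

A KUBE-SVC entry for kube-system/metrics-server here, combined with the connection-refused dials in the apiserver log, would point at the backend pod rather than the proxy layer.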
	
	* 
	* ==> kube-scheduler [4bf419c5667b737b2a79eeecc4bfaf12c653fe86a0b52a2a2b90b9bb390227c1] <==
	* W0626 20:52:03.822238       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:52:03.822295       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 20:52:04.700227       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 20:52:04.700288       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0626 20:52:04.759847       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0626 20:52:04.759965       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0626 20:52:04.814460       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 20:52:04.814514       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0626 20:52:04.860165       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:04.860219       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:04.884732       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 20:52:04.884788       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0626 20:52:04.933687       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:52:04.933765       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0626 20:52:04.957607       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:04.957664       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:04.976781       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:52:04.976842       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 20:52:04.984079       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:52:04.984117       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0626 20:52:05.087420       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:52:05.087477       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 20:52:05.315583       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 20:52:05.315638       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0626 20:52:08.292894       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
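
The scheduler's "forbidden" list/watch errors above are the usual startup race: its informers start before RBAC bootstrap finishes, and the noise stops once caches sync (last line above). A hedged spot-check of the scheduler's permissions, with verbs and resources taken from the errors themselves (impersonating system users requires impersonation rights), could look like:

	kubectl --context no-preload-934450 auth can-i list pods --as=system:kube-scheduler
	kubectl --context no-preload-934450 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler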
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 20:46:25 UTC, ends at Mon 2023-06-26 21:09:20 UTC. --
	Jun 26 21:07:07 no-preload-934450 kubelet[4191]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:07:07 no-preload-934450 kubelet[4191]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:07:07 no-preload-934450 kubelet[4191]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:07:07 no-preload-934450 kubelet[4191]: E0626 21:07:07.477583    4191 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jun 26 21:07:13 no-preload-934450 kubelet[4191]: E0626 21:07:13.204232    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:07:27 no-preload-934450 kubelet[4191]: E0626 21:07:27.206232    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:07:39 no-preload-934450 kubelet[4191]: E0626 21:07:39.203752    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:07:54 no-preload-934450 kubelet[4191]: E0626 21:07:54.203733    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:08:06 no-preload-934450 kubelet[4191]: E0626 21:08:06.226321    4191 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 26 21:08:06 no-preload-934450 kubelet[4191]: E0626 21:08:06.226417    4191 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 26 21:08:06 no-preload-934450 kubelet[4191]: E0626 21:08:06.226589    4191 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-md544,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-4dkpm_kube-system(2a86e50e-ef2a-442a-908f-d01b2292f977): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 21:08:06 no-preload-934450 kubelet[4191]: E0626 21:08:06.226640    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:08:07 no-preload-934450 kubelet[4191]: E0626 21:08:07.342354    4191 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:08:07 no-preload-934450 kubelet[4191]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:08:07 no-preload-934450 kubelet[4191]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:08:07 no-preload-934450 kubelet[4191]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:08:18 no-preload-934450 kubelet[4191]: E0626 21:08:18.203582    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:08:33 no-preload-934450 kubelet[4191]: E0626 21:08:33.209277    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:08:45 no-preload-934450 kubelet[4191]: E0626 21:08:45.204182    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:08:56 no-preload-934450 kubelet[4191]: E0626 21:08:56.206827    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
	Jun 26 21:09:07 no-preload-934450 kubelet[4191]: E0626 21:09:07.343463    4191 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:09:07 no-preload-934450 kubelet[4191]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:09:07 no-preload-934450 kubelet[4191]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:09:07 no-preload-934450 kubelet[4191]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:09:10 no-preload-934450 kubelet[4191]: E0626 21:09:10.204203    4191 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-4dkpm" podUID=2a86e50e-ef2a-442a-908f-d01b2292f977
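
The ErrImagePull/ImagePullBackOff loop above is the test behaving as designed: the metrics-server image is deliberately redirected to the unresolvable registry fake.domain (the same --registries=MetricsServer=fake.domain override appears for other profiles in the Audit table below). As an illustration, assuming crictl inside the guest, the pull failure could be reproduced with:

	minikube -p no-preload-934450 ssh "sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4"
	# expected to fail with: pinging container registry fake.domain: ... no such host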
	
	* 
	* ==> storage-provisioner [cce86e4ac6d109ae1aa30358126f87059206346c854451aa6562bd9e16d3acec] <==
	* I0626 20:52:25.057729       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 20:52:25.074122       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 20:52:25.074351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 20:52:25.089465       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 20:52:25.092288       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-934450_376175c8-174a-41b5-aa54-24ec858da196!
	I0626 20:52:25.092545       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ee38453-12ec-41a3-9a9e-be92985c03a2", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-934450_376175c8-174a-41b5-aa54-24ec858da196 became leader
	I0626 20:52:25.192707       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-934450_376175c8-174a-41b5-aa54-24ec858da196!
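
The storage-provisioner excerpt shows it acquiring leadership on the kube-system/k8s.io-minikube-hostpath Endpoints object before starting its controller. While the cluster is up, the current holder could be read from the client-go leader-election annotation (the annotation key below is the conventional one, stated here as an assumption):

	kubectl --context no-preload-934450 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'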
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-934450 -n no-preload-934450
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-934450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-4dkpm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-934450 describe pod metrics-server-74d5c6b9c-4dkpm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-934450 describe pod metrics-server-74d5c6b9c-4dkpm: exit status 1 (64.301071ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-4dkpm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-934450 describe pod metrics-server-74d5c6b9c-4dkpm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (210.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (210.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299839 -n embed-certs-299839
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-06-26 21:09:22.277608819 +0000 UTC m=+5626.777636643
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-299839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-299839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.371µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-299839 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
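
In short, the test waited 9m0s for any pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and none appeared, so the follow-up describe also hit the deadline. An equivalent manual check, with the context, namespace, and label taken from the failure messages above, would be:

	kubectl --context embed-certs-299839 -n kubernetes-dashboard get deploy,pods -o wide
	kubectl --context embed-certs-299839 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s
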
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299839 -n embed-certs-299839
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-299839 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-299839 logs -n 25: (1.047547372s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-299839            | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC | 26 Jun 23 20:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-473235  | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC | 26 Jun 23 20:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:42 UTC |                     |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-934450                  | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 20:43 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-299839                 | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-299839                                  | embed-certs-299839           | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-473235       | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-473235 | jenkins | v1.30.1 | 26 Jun 23 20:44 UTC | 26 Jun 23 20:52 UTC |
	|         | default-k8s-diff-port-473235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-490377                              | old-k8s-version-490377       | jenkins | v1.30.1 | 26 Jun 23 21:06 UTC | 26 Jun 23 21:06 UTC |
	| start   | -p newest-cni-421460 --memory=2200 --alsologtostderr   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:06 UTC | 26 Jun 23 21:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-421460             | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:07 UTC | 26 Jun 23 21:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:07 UTC | 26 Jun 23 21:08 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-421460                  | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-421460 --memory=2200 --alsologtostderr   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-421460 sudo                              | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:08 UTC | 26 Jun 23 21:08 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	| delete  | -p newest-cni-421460                                   | newest-cni-421460            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	| start   | -p auto-606105 --memory=3072                           | auto-606105                  | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-934450                                   | no-preload-934450            | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC | 26 Jun 23 21:09 UTC |
	| start   | -p kindnet-606105                                      | kindnet-606105               | jenkins | v1.30.1 | 26 Jun 23 21:09 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 21:09:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 21:09:22.085716   54061 out.go:296] Setting OutFile to fd 1 ...
	I0626 21:09:22.085873   54061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 21:09:22.085884   54061 out.go:309] Setting ErrFile to fd 2...
	I0626 21:09:22.085891   54061 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 21:09:22.086015   54061 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 21:09:22.086619   54061 out.go:303] Setting JSON to false
	I0626 21:09:22.087611   54061 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6709,"bootTime":1687807053,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 21:09:22.087675   54061 start.go:137] virtualization: kvm guest
	I0626 21:09:22.090013   54061 out.go:177] * [kindnet-606105] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 21:09:22.091934   54061 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 21:09:22.091988   54061 notify.go:220] Checking for updates...
	I0626 21:09:22.093443   54061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 21:09:22.094856   54061 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 21:09:22.097331   54061 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 21:09:22.098677   54061 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 21:09:22.100111   54061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 21:09:22.101820   54061 config.go:182] Loaded profile config "auto-606105": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:22.101934   54061 config.go:182] Loaded profile config "default-k8s-diff-port-473235": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:22.102027   54061 config.go:182] Loaded profile config "embed-certs-299839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 21:09:22.102149   54061 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 21:09:22.139050   54061 out.go:177] * Using the kvm2 driver based on user configuration
	I0626 21:09:22.140490   54061 start.go:297] selected driver: kvm2
	I0626 21:09:22.140508   54061 start.go:954] validating driver "kvm2" against <nil>
	I0626 21:09:22.140521   54061 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 21:09:22.141575   54061 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 21:09:22.141680   54061 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 21:09:22.156379   54061 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 21:09:22.156444   54061 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 21:09:22.156685   54061 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 21:09:22.156725   54061 cni.go:84] Creating CNI manager for "kindnet"
	I0626 21:09:22.156738   54061 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0626 21:09:22.156747   54061 start_flags.go:319] config:
	{Name:kindnet-606105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-606105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 21:09:22.156933   54061 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 21:09:22.158885   54061 out.go:177] * Starting control plane node kindnet-606105 in cluster kindnet-606105
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-06-26 20:46:44 UTC, ends at Mon 2023-06-26 21:09:23 UTC. --
	Jun 26 21:09:22 embed-certs-299839 crio[740]: time="2023-06-26 21:09:22.284208796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6,PodSandboxId:7474cf64113f113657e919862cde97615f8a0bbf69bd073d5dedf69613f5d1a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812756234774600,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51730db4-00b6-4240-917c-fed87615fd6e,},Annotations:map[string]string{io.kubernetes.container.hash: a5b8bc0a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848,PodSandboxId:e267a7bb0e9d69029e300c23e8303f15d10c4b89c1d72f6f8253cd565ecae91a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812755823573852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scfwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60aed765-875d-4023-9ce9-97b5a6a47995,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4a37f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222,PodSandboxId:357dd9f10db5654a0810550a5e45fe1f56ffe7d3dfd666a6e73c3d1ec46bd308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812755210396883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-tl42z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429d2f2e-a161-4353-8a29-1a4f8ddb4cc8,},Annotations:map[string]string{io.kubernetes.container.hash: b8fc0e35,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30,PodSandboxId:1547cb51040eb904d188464b633adbe2beaef07207eba8efa18c795a3aaedf1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812731778108648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d6de7e6c5751e431a9ee06dd0d7ceee,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8,PodSandboxId:8912d1e8039d298f7c5958a3ad4b43e5ad7a65dfab582f055b016601be5948fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812731565600996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916973a30c4bd49353b106072d59cc46,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58,PodSandboxId:45d98859a15fceb5152c2e51a077af78bd86ea2c947abedc63e22df78b22a2e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812731377697776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03830abe69457302243911b537c06ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 19e1583a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84,PodSandboxId:cef0947b2f3743f87be4db35de5f80f3511f1cf59b96af4ce13359284ffd07c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812731405632634,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829ae1cb17e2bb94bba22c9e79b6c706,},Annotations:map[string]string{io.kubernetes.container.hash: d3109b24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ae9d11f2-e6a1-4e21-8d3c-bdbb7169e847 name=/runtime.v1.RuntimeService/ListContainers
	Jun 26 21:09:22 embed-certs-299839 crio[740]: time="2023-06-26 21:09:22.741901837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=96716a3b-3e22-41c7-9d47-b4cc4c2be196 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:22 embed-certs-299839 crio[740]: time="2023-06-26 21:09:22.741991002Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=96716a3b-3e22-41c7-9d47-b4cc4c2be196 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:22 embed-certs-299839 crio[740]: time="2023-06-26 21:09:22.742166140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6,PodSandboxId:7474cf64113f113657e919862cde97615f8a0bbf69bd073d5dedf69613f5d1a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812756234774600,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51730db4-00b6-4240-917c-fed87615fd6e,},Annotations:map[string]string{io.kubernetes.container.hash: a5b8bc0a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848,PodSandboxId:e267a7bb0e9d69029e300c23e8303f15d10c4b89c1d72f6f8253cd565ecae91a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812755823573852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scfwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60aed765-875d-4023-9ce9-97b5a6a47995,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4a37f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222,PodSandboxId:357dd9f10db5654a0810550a5e45fe1f56ffe7d3dfd666a6e73c3d1ec46bd308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812755210396883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-tl42z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429d2f2e-a161-4353-8a29-1a4f8ddb4cc8,},Annotations:map[string]string{io.kubernetes.container.hash: b8fc0e35,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30,PodSandboxId:1547cb51040eb904d188464b633adbe2beaef07207eba8efa18c795a3aaedf1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812731778108648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d6de7e6c5751e431a9ee06dd0d7ceee,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8,PodSandboxId:8912d1e8039d298f7c5958a3ad4b43e5ad7a65dfab582f055b016601be5948fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812731565600996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916973a30c4bd49353b106072d59cc46,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58,PodSandboxId:45d98859a15fceb5152c2e51a077af78bd86ea2c947abedc63e22df78b22a2e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812731377697776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03830abe69457302243911b537c06ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 19e1583a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84,PodSandboxId:cef0947b2f3743f87be4db35de5f80f3511f1cf59b96af4ce13359284ffd07c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812731405632634,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829ae1cb17e2bb94bba22c9e79b6c706,},Annotations:map[string]string{io.kubernetes.container.hash: d3109b24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=96716a3b-3e22-41c7-9d47-b4cc4c2be196 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:22 embed-certs-299839 crio[740]: time="2023-06-26 21:09:22.777649437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=80ab2894-931d-4f63-b635-c3f06f8879fb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:22 embed-certs-299839 crio[740]: time="2023-06-26 21:09:22.777752927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=80ab2894-931d-4f63-b635-c3f06f8879fb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jun 26 21:09:22 embed-certs-299839 crio[740]: time="2023-06-26 21:09:22.777919945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6,PodSandboxId:7474cf64113f113657e919862cde97615f8a0bbf69bd073d5dedf69613f5d1a0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1687812756234774600,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51730db4-00b6-4240-917c-fed87615fd6e,},Annotations:map[string]string{io.kubernetes.container.hash: a5b8bc0a,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848,PodSandboxId:e267a7bb0e9d69029e300c23e8303f15d10c4b89c1d72f6f8253cd565ecae91a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1687812755823573852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-scfwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60aed765-875d-4023-9ce9-97b5a6a47995,},Annotations:map[string]string{io.kubernetes.container.hash: 6c4a37f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222,PodSandboxId:357dd9f10db5654a0810550a5e45fe1f56ffe7d3dfd666a6e73c3d1ec46bd308,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1687812755210396883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-tl42z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 429d2f2e-a161-4353-8a29-1a4f8ddb4cc8,},Annotations:map[string]string{io.kubernetes.container.hash: b8fc0e35,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30,PodSandboxId:1547cb51040eb904d188464b633adbe2beaef07207eba8efa18c795a3aaedf1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1687812731778108648,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-299839,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 2d6de7e6c5751e431a9ee06dd0d7ceee,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8,PodSandboxId:8912d1e8039d298f7c5958a3ad4b43e5ad7a65dfab582f055b016601be5948fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1687812731565600996,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-299839,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 916973a30c4bd49353b106072d59cc46,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58,PodSandboxId:45d98859a15fceb5152c2e51a077af78bd86ea2c947abedc63e22df78b22a2e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1687812731377697776,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 03830abe69457302243911b537c06ef5,},Annotations:map[string]string{io.kubernetes.container.hash: 19e1583a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84,PodSandboxId:cef0947b2f3743f87be4db35de5f80f3511f1cf59b96af4ce13359284ffd07c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1687812731405632634,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-299839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829ae1cb17e2bb94bba22c9e79b6c70
6,},Annotations:map[string]string{io.kubernetes.container.hash: d3109b24,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=80ab2894-931d-4f63-b635-c3f06f8879fb name=/runtime.v1alpha2.RuntimeService/ListContainers
	[... identical ListContainers request/response entries repeated through 21:09:22.995 ...]
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	f87813547f704       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   7474cf64113f1
	3aa7ee4c1eadc       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   16 minutes ago      Running             kube-proxy                0                   e267a7bb0e9d6
	f5850ea0b11e2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   357dd9f10db56
	e492b7211ab33       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   17 minutes ago      Running             kube-controller-manager   2                   1547cb51040eb
	c6b6f0adc88c6       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   17 minutes ago      Running             kube-scheduler            2                   8912d1e8039d2
	e57e4ae17d5c5       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   17 minutes ago      Running             etcd                      2                   cef0947b2f374
	8f534a31963ab       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   17 minutes ago      Running             kube-apiserver            2                   45d98859a15fc
	
	* 
	* ==> coredns [f5850ea0b11e2af0f2a4dead86d41210800a381226d4e97f46d97f0bd9aa1222] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33072 - 16870 "HINFO IN 3099440260193012276.5770977196869146280. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015791912s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-299839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-299839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=embed-certs-299839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T20_52_19_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 20:52:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-299839
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 21:09:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 21:08:00 +0000   Mon, 26 Jun 2023 20:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 21:08:00 +0000   Mon, 26 Jun 2023 20:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 21:08:00 +0000   Mon, 26 Jun 2023 20:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 21:08:00 +0000   Mon, 26 Jun 2023 20:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    embed-certs-299839
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a134bb846064955a35a246d03c68303
	  System UUID:                0a134bb8-4606-4955-a35a-246d03c68303
	  Boot ID:                    f1a5622f-2af5-4c66-aabf-2d107fda507d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-tl42z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-299839                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-299839             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-299839    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-scfwr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-299839             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-74d5c6b9c-vkggw                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node embed-certs-299839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node embed-certs-299839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node embed-certs-299839 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             17m   kubelet          Node embed-certs-299839 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                17m   kubelet          Node embed-certs-299839 status is now: NodeReady
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-299839 event: Registered Node embed-certs-299839 in Controller
	
	* 
	* ==> dmesg <==
	* [Jun26 20:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073051] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.217135] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.218815] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134412] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.551095] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.181934] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.119493] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.154677] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.135879] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +0.226171] systemd-fstab-generator[723]: Ignoring "noauto" for root device
	[Jun26 20:47] systemd-fstab-generator[941]: Ignoring "noauto" for root device
	[ +19.813990] kauditd_printk_skb: 34 callbacks suppressed
	[Jun26 20:52] systemd-fstab-generator[3710]: Ignoring "noauto" for root device
	[  +9.806377] systemd-fstab-generator[4038]: Ignoring "noauto" for root device
	[ +21.665236] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [e57e4ae17d5c541dd43372795d5032b9882a2a76c3b656408c7ff8f782a80f84] <==
	* {"level":"info","ts":"2023-06-26T20:52:14.021Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec92057c53901c6c","local-member-id":"9049a3446d48952a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:14.021Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T20:52:14.021Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T21:02:14.049Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":690}
	{"level":"info","ts":"2023-06-26T21:02:14.052Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":690,"took":"1.988775ms","hash":3570885676}
	{"level":"info","ts":"2023-06-26T21:02:14.052Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3570885676,"revision":690,"compact-revision":-1}
	{"level":"info","ts":"2023-06-26T21:07:14.057Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":934}
	{"level":"info","ts":"2023-06-26T21:07:14.060Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":934,"took":"1.160982ms","hash":2801579465}
	{"level":"info","ts":"2023-06-26T21:07:14.060Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2801579465,"revision":934,"compact-revision":690}
	{"level":"warn","ts":"2023-06-26T21:07:21.970Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.690711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T21:07:21.971Z","caller":"traceutil/trace.go:171","msg":"trace[1983784127] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1184; }","duration":"132.085806ms","start":"2023-06-26T21:07:21.839Z","end":"2023-06-26T21:07:21.971Z","steps":["trace[1983784127] 'count revisions from in-memory index tree'  (duration: 131.471663ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:07:26.277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.703652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T21:07:26.277Z","caller":"traceutil/trace.go:171","msg":"trace[553832068] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:1187; }","duration":"219.10434ms","start":"2023-06-26T21:07:26.058Z","end":"2023-06-26T21:07:26.277Z","steps":["trace[553832068] 'count revisions from in-memory index tree'  (duration: 218.393405ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:07:27.999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.916237ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10748554065756051648 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.51\" mod_revision:1180 > success:<request_put:<key:\"/registry/masterleases/192.168.39.51\" value_size:66 lease:1525182028901275838 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.51\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-06-26T21:07:27.999Z","caller":"traceutil/trace.go:171","msg":"trace[222719111] linearizableReadLoop","detail":"{readStateIndex:1385; appliedIndex:1384; }","duration":"100.370666ms","start":"2023-06-26T21:07:27.899Z","end":"2023-06-26T21:07:27.999Z","steps":["trace[222719111] 'read index received'  (duration: 110.641µs)","trace[222719111] 'applied index is now lower than readState.Index'  (duration: 100.257826ms)"],"step_count":2}
	{"level":"info","ts":"2023-06-26T21:07:27.999Z","caller":"traceutil/trace.go:171","msg":"trace[1659570204] transaction","detail":"{read_only:false; response_revision:1188; number_of_response:1; }","duration":"264.430491ms","start":"2023-06-26T21:07:27.735Z","end":"2023-06-26T21:07:27.999Z","steps":["trace[1659570204] 'process raft request'  (duration: 127.971438ms)","trace[1659570204] 'compare'  (duration: 134.672046ms)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T21:07:28.001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.843288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2023-06-26T21:07:28.001Z","caller":"traceutil/trace.go:171","msg":"trace[2035040190] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1188; }","duration":"101.963103ms","start":"2023-06-26T21:07:27.899Z","end":"2023-06-26T21:07:28.001Z","steps":["trace[2035040190] 'agreement among raft nodes before linearized reading'  (duration: 100.510429ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T21:08:35.087Z","caller":"traceutil/trace.go:171","msg":"trace[227240562] linearizableReadLoop","detail":"{readStateIndex:1454; appliedIndex:1453; }","duration":"424.713102ms","start":"2023-06-26T21:08:34.662Z","end":"2023-06-26T21:08:35.087Z","steps":["trace[227240562] 'read index received'  (duration: 424.548136ms)","trace[227240562] 'applied index is now lower than readState.Index'  (duration: 164.521µs)"],"step_count":2}
	{"level":"info","ts":"2023-06-26T21:08:35.087Z","caller":"traceutil/trace.go:171","msg":"trace[604338584] transaction","detail":"{read_only:false; response_revision:1243; number_of_response:1; }","duration":"614.716335ms","start":"2023-06-26T21:08:34.472Z","end":"2023-06-26T21:08:35.087Z","steps":["trace[604338584] 'process raft request'  (duration: 614.314925ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:08:35.087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"425.163429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T21:08:35.088Z","caller":"traceutil/trace.go:171","msg":"trace[1121522812] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1243; }","duration":"425.785014ms","start":"2023-06-26T21:08:34.662Z","end":"2023-06-26T21:08:35.088Z","steps":["trace[1121522812] 'agreement among raft nodes before linearized reading'  (duration: 425.008526ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T21:08:35.088Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-26T21:08:34.662Z","time spent":"425.842847ms","remote":"127.0.0.1:38298","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-06-26T21:08:35.087Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-06-26T21:08:34.472Z","time spent":"614.829764ms","remote":"127.0.0.1:38282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1242 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-06-26T21:08:35.927Z","caller":"traceutil/trace.go:171","msg":"trace[945378458] transaction","detail":"{read_only:false; response_revision:1244; number_of_response:1; }","duration":"135.265611ms","start":"2023-06-26T21:08:35.791Z","end":"2023-06-26T21:08:35.927Z","steps":["trace[945378458] 'process raft request'  (duration: 135.12347ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  21:09:23 up 22 min,  0 users,  load average: 0.50, 0.26, 0.19
	Linux embed-certs-299839 5.10.57 #1 SMP Thu Jun 22 21:22:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8f534a31963ab1af59d688bba6fee1ff29219410ab6038ff0fbb1e39a2a4ed58] <==
	* I0626 21:07:15.718568       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0626 21:07:15.843084       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.135.1:443: connect: connection refused
	I0626 21:07:15.843197       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:07:16.841987       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:07:16.842090       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:07:16.842162       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:07:16.842272       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:07:16.842349       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:07:16.843518       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:08:15.718321       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.135.1:443: connect: connection refused
	I0626 21:08:15.718379       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0626 21:08:16.843223       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:08:16.843543       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0626 21:08:16.843599       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0626 21:08:16.843784       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 21:08:16.843845       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 21:08:16.845557       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 21:08:35.088848       1 trace.go:219] Trace[2029112115]: "Update" accept:application/json, */*,audit-id:b33f0844-83bd-4e16-afb7-e405c3533ecc,client:192.168.39.51,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (26-Jun-2023 21:08:34.470) (total time: 618ms):
	Trace[2029112115]: ["GuaranteedUpdate etcd3" audit-id:b33f0844-83bd-4e16-afb7-e405c3533ecc,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 617ms (21:08:34.471)
	Trace[2029112115]:  ---"Txn call completed" 616ms (21:08:35.088)]
	Trace[2029112115]: [618.410737ms] [618.410737ms] END
	I0626 21:09:15.719105       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.135.1:443: connect: connection refused
	I0626 21:09:15.719182       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [e492b7211ab3376747c29a42150b985bb65e7906551a2271c0f34a7ee48d5b30] <==
	* W0626 21:03:01.302718       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:03:30.827333       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:03:31.310955       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:00.833347       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:01.320298       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:04:30.841640       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:04:31.331846       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:00.851785       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:05:01.341602       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:05:30.858608       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:05:31.351012       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:06:00.864799       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:06:01.361624       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:06:30.871253       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:06:31.370418       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:07:00.877864       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:07:01.384587       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:07:30.884664       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:07:31.393854       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:08:00.893136       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:08:01.403111       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:08:30.901725       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:08:31.413202       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0626 21:09:00.909361       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0626 21:09:01.424360       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [3aa7ee4c1eadcca8f13f0c9339e8f3af719313995cb82f73e107822b0d188848] <==
	* I0626 20:52:36.503863       1 node.go:141] Successfully retrieved node IP: 192.168.39.51
	I0626 20:52:36.504036       1 server_others.go:110] "Detected node IP" address="192.168.39.51"
	I0626 20:52:36.504108       1 server_others.go:554] "Using iptables proxy"
	I0626 20:52:36.565119       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0626 20:52:36.565216       1 server_others.go:192] "Using iptables Proxier"
	I0626 20:52:36.566137       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 20:52:36.567379       1 server.go:658] "Version info" version="v1.27.3"
	I0626 20:52:36.567430       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 20:52:36.569149       1 config.go:188] "Starting service config controller"
	I0626 20:52:36.569755       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 20:52:36.570082       1 config.go:97] "Starting endpoint slice config controller"
	I0626 20:52:36.570118       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 20:52:36.572296       1 config.go:315] "Starting node config controller"
	I0626 20:52:36.572337       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 20:52:36.670580       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 20:52:36.670596       1 shared_informer.go:318] Caches are synced for service config
	I0626 20:52:36.672538       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c6b6f0adc88c649a2ab288b175bd0b53488e75450c04c4391946e7b2f099ada8] <==
	* W0626 20:52:16.665087       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 20:52:16.665195       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0626 20:52:16.780738       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:16.780823       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:16.780885       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 20:52:16.780922       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0626 20:52:16.812837       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 20:52:16.813377       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 20:52:16.868594       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:16.868660       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:16.974699       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0626 20:52:16.974751       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0626 20:52:17.026047       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 20:52:17.026108       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0626 20:52:17.033006       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 20:52:17.033118       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0626 20:52:17.079585       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 20:52:17.079639       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 20:52:17.084344       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:17.084422       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0626 20:52:17.181973       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 20:52:17.182080       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 20:52:17.218805       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0626 20:52:17.218976       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0626 20:52:20.023940       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-06-26 20:46:44 UTC, ends at Mon 2023-06-26 21:09:23 UTC. --
	Jun 26 21:07:19 embed-certs-299839 kubelet[4045]: E0626 21:07:19.715037    4045 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:07:19 embed-certs-299839 kubelet[4045]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:07:19 embed-certs-299839 kubelet[4045]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:07:19 embed-certs-299839 kubelet[4045]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:07:19 embed-certs-299839 kubelet[4045]: E0626 21:07:19.885109    4045 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jun 26 21:07:23 embed-certs-299839 kubelet[4045]: E0626 21:07:23.581012    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:07:36 embed-certs-299839 kubelet[4045]: E0626 21:07:36.586298    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:07:51 embed-certs-299839 kubelet[4045]: E0626 21:07:51.580272    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:08:02 embed-certs-299839 kubelet[4045]: E0626 21:08:02.580718    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:08:15 embed-certs-299839 kubelet[4045]: E0626 21:08:15.580753    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:08:19 embed-certs-299839 kubelet[4045]: E0626 21:08:19.707619    4045 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:08:19 embed-certs-299839 kubelet[4045]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:08:19 embed-certs-299839 kubelet[4045]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:08:19 embed-certs-299839 kubelet[4045]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 26 21:08:28 embed-certs-299839 kubelet[4045]: E0626 21:08:28.588840    4045 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 26 21:08:28 embed-certs-299839 kubelet[4045]: E0626 21:08:28.588899    4045 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 26 21:08:28 embed-certs-299839 kubelet[4045]: E0626 21:08:28.589088    4045 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9nmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-vkggw_kube-system(147679d1-7453-4e55-862c-fec18e08ba84): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 26 21:08:28 embed-certs-299839 kubelet[4045]: E0626 21:08:28.589130    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:08:43 embed-certs-299839 kubelet[4045]: E0626 21:08:43.581353    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:08:58 embed-certs-299839 kubelet[4045]: E0626 21:08:58.580657    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:09:11 embed-certs-299839 kubelet[4045]: E0626 21:09:11.582354    4045 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-vkggw" podUID=147679d1-7453-4e55-862c-fec18e08ba84
	Jun 26 21:09:19 embed-certs-299839 kubelet[4045]: E0626 21:09:19.704870    4045 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 26 21:09:19 embed-certs-299839 kubelet[4045]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 26 21:09:19 embed-certs-299839 kubelet[4045]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 26 21:09:19 embed-certs-299839 kubelet[4045]:  > table=nat chain=KUBE-KUBELET-CANARY
	
	* 
	* ==> storage-provisioner [f87813547f704bd868623cb649ea4760c636c7496eec6535bfd174f5fa1ef8e6] <==
	* I0626 20:52:36.414503       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 20:52:36.431317       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 20:52:36.432340       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 20:52:36.447880       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 20:52:36.448133       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-299839_f79cd480-b3e0-448b-a8c4-e03ac591d538!
	I0626 20:52:36.450187       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"57ff7a0a-6fb7-4c94-ada5-fb66605cf24f", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-299839_f79cd480-b3e0-448b-a8c4-e03ac591d538 became leader
	I0626 20:52:36.549614       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-299839_f79cd480-b3e0-448b-a8c4-e03ac591d538!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-299839 -n embed-certs-299839
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-299839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-vkggw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-299839 describe pod metrics-server-74d5c6b9c-vkggw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-299839 describe pod metrics-server-74d5c6b9c-vkggw: exit status 1 (64.287233ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-vkggw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-299839 describe pod metrics-server-74d5c6b9c-vkggw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (210.53s)
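For hand triage of this failure, the check that timed out can be approximated with plain kubectl against the same profile. A minimal sketch, assuming the embed-certs-299839 profile is still running and that the metrics-server addon labels its pods with k8s-app=metrics-server (the usual label in the addon manifests; verify against your minikube version):

	# Confirm the addon's workload survived the stop/start cycle:
	kubectl --context embed-certs-299839 -n kube-system get deploy metrics-server
	# List the addon's pods; an ImagePullBackOff here matches the kubelet log above:
	kubectl --context embed-certs-299839 -n kube-system get pods -l k8s-app=metrics-server
	# Describe by label while the pod still exists; the post-mortem above hit
	# NotFound because the pod was deleted between the list and the describe.
	kubectl --context embed-certs-299839 -n kube-system describe pods -l k8s-app=metrics-server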

                                                
                                    

Test pass (231/292)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 29.27
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.27.3/json-events 17.57
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.52
20 TestOffline 129.1
22 TestAddons/Setup 157.46
24 TestAddons/parallel/Registry 16.74
26 TestAddons/parallel/InspektorGadget 11.06
27 TestAddons/parallel/MetricsServer 6.33
28 TestAddons/parallel/HelmTiller 12.59
30 TestAddons/parallel/CSI 123.58
31 TestAddons/parallel/Headlamp 14.38
32 TestAddons/parallel/CloudSpanner 5.38
35 TestAddons/serial/GCPAuth/Namespaces 0.12
37 TestCertOptions 81.42
38 TestCertExpiration 266.33
40 TestForceSystemdFlag 56.57
41 TestForceSystemdEnv 54.87
42 TestKVMDriverInstallOrUpdate 4.91
46 TestErrorSpam/setup 45.56
47 TestErrorSpam/start 0.32
48 TestErrorSpam/status 0.74
49 TestErrorSpam/pause 1.44
50 TestErrorSpam/unpause 1.56
51 TestErrorSpam/stop 2.2
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 100.19
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 32.91
58 TestFunctional/serial/KubeContext 0.04
59 TestFunctional/serial/KubectlGetPods 0.08
62 TestFunctional/serial/CacheCmd/cache/add_remote 3
63 TestFunctional/serial/CacheCmd/cache/add_local 2.13
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
65 TestFunctional/serial/CacheCmd/cache/list 0.04
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
67 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
68 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/serial/MinikubeKubectlCmd 0.1
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
71 TestFunctional/serial/ExtraConfig 37.16
72 TestFunctional/serial/ComponentHealth 0.07
73 TestFunctional/serial/LogsCmd 1.34
74 TestFunctional/serial/LogsFileCmd 1.32
75 TestFunctional/serial/InvalidService 5
77 TestFunctional/parallel/ConfigCmd 0.27
78 TestFunctional/parallel/DashboardCmd 22.21
79 TestFunctional/parallel/DryRun 0.28
80 TestFunctional/parallel/InternationalLanguage 0.14
81 TestFunctional/parallel/StatusCmd 1.19
85 TestFunctional/parallel/ServiceCmdConnect 27.64
86 TestFunctional/parallel/AddonsCmd 0.11
87 TestFunctional/parallel/PersistentVolumeClaim 55
89 TestFunctional/parallel/SSHCmd 0.45
90 TestFunctional/parallel/CpCmd 0.91
91 TestFunctional/parallel/MySQL 28.39
92 TestFunctional/parallel/FileSync 0.2
93 TestFunctional/parallel/CertSync 1.4
97 TestFunctional/parallel/NodeLabels 0.09
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
101 TestFunctional/parallel/License 0.63
102 TestFunctional/parallel/ServiceCmd/DeployApp 28.28
112 TestFunctional/parallel/ServiceCmd/List 0.54
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
114 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
115 TestFunctional/parallel/ProfileCmd/profile_list 0.39
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
117 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
118 TestFunctional/parallel/MountCmd/any-port 9.91
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
122 TestFunctional/parallel/ServiceCmd/Format 0.41
123 TestFunctional/parallel/ServiceCmd/URL 0.46
124 TestFunctional/parallel/Version/short 0.04
125 TestFunctional/parallel/Version/components 1.02
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.89
131 TestFunctional/parallel/ImageCommands/Setup 2.15
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.88
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.88
134 TestFunctional/parallel/MountCmd/specific-port 1.96
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.28
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.09
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.28
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.97
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.63
141 TestFunctional/delete_addon-resizer_images 0.07
142 TestFunctional/delete_my-image_image 0.01
143 TestFunctional/delete_minikube_cached_images 0.01
147 TestIngressAddonLegacy/StartLegacyK8sCluster 120.87
149 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.31
150 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.39
154 TestJSONOutput/start/Command 99.94
155 TestJSONOutput/start/Audit 0
157 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/pause/Command 0.62
161 TestJSONOutput/pause/Audit 0
163 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/unpause/Command 0.59
167 TestJSONOutput/unpause/Audit 0
169 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/stop/Command 7.08
173 TestJSONOutput/stop/Audit 0
175 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
177 TestErrorJSONOutput 0.17
182 TestMainNoArgs 0.04
183 TestMinikubeProfile 94.08
186 TestMountStart/serial/StartWithMountFirst 27.79
187 TestMountStart/serial/VerifyMountFirst 0.37
188 TestMountStart/serial/StartWithMountSecond 29.94
189 TestMountStart/serial/VerifyMountSecond 0.38
190 TestMountStart/serial/DeleteFirst 0.88
191 TestMountStart/serial/VerifyMountPostDelete 0.38
192 TestMountStart/serial/Stop 1.13
193 TestMountStart/serial/RestartStopped 23.78
194 TestMountStart/serial/VerifyMountPostStop 0.38
197 TestMultiNode/serial/FreshStart2Nodes 108.93
198 TestMultiNode/serial/DeployApp2Nodes 6.16
200 TestMultiNode/serial/AddNode 46.24
201 TestMultiNode/serial/ProfileList 0.2
202 TestMultiNode/serial/CopyFile 7.35
203 TestMultiNode/serial/StopNode 2.95
204 TestMultiNode/serial/StartAfterStop 33.13
206 TestMultiNode/serial/DeleteNode 1.73
208 TestMultiNode/serial/RestartMultiNode 440.55
209 TestMultiNode/serial/ValidateNameConflict 51.18
216 TestScheduledStopUnix 118.15
222 TestKubernetesUpgrade 216.78
235 TestPause/serial/Start 121.44
240 TestNetworkPlugins/group/false 2.76
245 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
246 TestNoKubernetes/serial/StartWithK8s 123.52
247 TestPause/serial/SecondStartNoReconfiguration 41.82
248 TestNoKubernetes/serial/StartWithStopK8s 42.95
249 TestPause/serial/Pause 0.83
250 TestPause/serial/VerifyStatus 0.27
251 TestPause/serial/Unpause 0.72
252 TestPause/serial/PauseAgain 0.96
253 TestPause/serial/DeletePaused 1.83
254 TestPause/serial/VerifyDeletedResources 0.63
255 TestNoKubernetes/serial/Start 56.59
256 TestStoppedBinaryUpgrade/Setup 2.22
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
259 TestNoKubernetes/serial/ProfileList 0.38
260 TestNoKubernetes/serial/Stop 1.18
263 TestStartStop/group/old-k8s-version/serial/FirstStart 129.26
264 TestStartStop/group/old-k8s-version/serial/DeployApp 11.48
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.39
269 TestStartStop/group/no-preload/serial/FirstStart 85.29
271 TestStartStop/group/embed-certs/serial/FirstStart 125.2
273 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 135.44
274 TestStartStop/group/no-preload/serial/DeployApp 11.58
275 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
277 TestStartStop/group/embed-certs/serial/DeployApp 11.53
279 TestStartStop/group/old-k8s-version/serial/SecondStart 792.83
280 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
282 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.46
283 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
286 TestStartStop/group/no-preload/serial/SecondStart 800.63
288 TestStartStop/group/embed-certs/serial/SecondStart 760.86
290 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 498.02
300 TestStartStop/group/newest-cni/serial/FirstStart 64.06
301 TestStartStop/group/newest-cni/serial/DeployApp 0
302 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
303 TestStartStop/group/newest-cni/serial/Stop 12.11
304 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
305 TestStartStop/group/newest-cni/serial/SecondStart 51.78
306 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
307 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
309 TestStartStop/group/newest-cni/serial/Pause 2.5
310 TestNetworkPlugins/group/auto/Start 100.57
311 TestNetworkPlugins/group/kindnet/Start 76.04
312 TestNetworkPlugins/group/calico/Start 123.19
313 TestNetworkPlugins/group/custom-flannel/Start 128.5
314 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
316 TestNetworkPlugins/group/kindnet/NetCatPod 12.51
317 TestNetworkPlugins/group/auto/KubeletFlags 0.22
318 TestNetworkPlugins/group/auto/NetCatPod 12.44
319 TestNetworkPlugins/group/kindnet/DNS 0.2
320 TestNetworkPlugins/group/kindnet/Localhost 0.2
321 TestNetworkPlugins/group/auto/DNS 0.22
322 TestNetworkPlugins/group/kindnet/HairPin 0.21
323 TestNetworkPlugins/group/auto/Localhost 0.2
324 TestNetworkPlugins/group/auto/HairPin 0.19
325 TestNetworkPlugins/group/enable-default-cni/Start 103.97
326 TestNetworkPlugins/group/flannel/Start 114.19
327 TestNetworkPlugins/group/calico/ControllerPod 5.02
328 TestNetworkPlugins/group/calico/KubeletFlags 0.2
329 TestNetworkPlugins/group/calico/NetCatPod 11.42
330 TestNetworkPlugins/group/calico/DNS 0.23
331 TestNetworkPlugins/group/calico/Localhost 0.19
332 TestNetworkPlugins/group/calico/HairPin 0.21
333 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
334 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.52
335 TestNetworkPlugins/group/custom-flannel/DNS 0.23
336 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
337 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
338 TestNetworkPlugins/group/bridge/Start 105.58
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.42
341 TestNetworkPlugins/group/flannel/ControllerPod 5.03
342 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
343 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
344 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
345 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
346 TestNetworkPlugins/group/flannel/NetCatPod 12.42
347 TestNetworkPlugins/group/flannel/DNS 0.17
348 TestNetworkPlugins/group/flannel/Localhost 0.14
349 TestNetworkPlugins/group/flannel/HairPin 0.14
350 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
351 TestNetworkPlugins/group/bridge/NetCatPod 12.37
352 TestNetworkPlugins/group/bridge/DNS 0.18
353 TestNetworkPlugins/group/bridge/Localhost 0.13
354 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (29.27s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-081510 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-081510 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (29.271820736s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (29.27s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-081510
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-081510: exit status 85 (54.49753ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-081510 | jenkins | v1.30.1 | 26 Jun 23 19:35 UTC |          |
	|         | -p download-only-081510        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 19:35:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 19:35:35.567216   14455 out.go:296] Setting OutFile to fd 1 ...
	I0626 19:35:35.567422   14455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:35:35.567432   14455 out.go:309] Setting ErrFile to fd 2...
	I0626 19:35:35.567437   14455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:35:35.567537   14455 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	W0626 19:35:35.567682   14455 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16761-7242/.minikube/config/config.json: open /home/jenkins/minikube-integration/16761-7242/.minikube/config/config.json: no such file or directory
	I0626 19:35:35.568243   14455 out.go:303] Setting JSON to true
	I0626 19:35:35.569000   14455 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1083,"bootTime":1687807053,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 19:35:35.569056   14455 start.go:137] virtualization: kvm guest
	I0626 19:35:35.571375   14455 out.go:97] [download-only-081510] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 19:35:35.573053   14455 out.go:169] MINIKUBE_LOCATION=16761
	W0626 19:35:35.571472   14455 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball: no such file or directory
	I0626 19:35:35.571509   14455 notify.go:220] Checking for updates...
	I0626 19:35:35.575824   14455 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 19:35:35.577085   14455 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 19:35:35.578548   14455 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:35:35.579931   14455 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0626 19:35:35.582475   14455 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0626 19:35:35.582715   14455 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 19:35:35.687330   14455 out.go:97] Using the kvm2 driver based on user configuration
	I0626 19:35:35.687361   14455 start.go:297] selected driver: kvm2
	I0626 19:35:35.687367   14455 start.go:954] validating driver "kvm2" against <nil>
	I0626 19:35:35.687674   14455 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:35:35.687808   14455 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 19:35:35.702227   14455 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 19:35:35.702267   14455 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 19:35:35.702721   14455 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0626 19:35:35.702870   14455 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0626 19:35:35.702894   14455 cni.go:84] Creating CNI manager for ""
	I0626 19:35:35.702901   14455 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 19:35:35.702909   14455 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0626 19:35:35.702915   14455 start_flags.go:319] config:
	{Name:download-only-081510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-081510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:35:35.703120   14455 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:35:35.705073   14455 out.go:97] Downloading VM boot image ...
	I0626 19:35:35.705124   14455 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/iso/amd64/minikube-v1.30.1-1687455737-16703-amd64.iso
	I0626 19:35:45.457155   14455 out.go:97] Starting control plane node download-only-081510 in cluster download-only-081510
	I0626 19:35:45.457175   14455 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 19:35:45.567222   14455 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0626 19:35:45.567265   14455 cache.go:57] Caching tarball of preloaded images
	I0626 19:35:45.567400   14455 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 19:35:45.570746   14455 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0626 19:35:45.570764   14455 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0626 19:35:45.687776   14455 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-081510"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

TestDownloadOnly/v1.27.3/json-events (17.57s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-081510 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-081510 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.568914094s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (17.57s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-081510
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-081510: exit status 85 (54.094731ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-081510 | jenkins | v1.30.1 | 26 Jun 23 19:35 UTC |          |
	|         | -p download-only-081510        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-081510 | jenkins | v1.30.1 | 26 Jun 23 19:36 UTC |          |
	|         | -p download-only-081510        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 19:36:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 19:36:04.894671   14556 out.go:296] Setting OutFile to fd 1 ...
	I0626 19:36:04.894771   14556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:36:04.894780   14556 out.go:309] Setting ErrFile to fd 2...
	I0626 19:36:04.894784   14556 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:36:04.894881   14556 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	W0626 19:36:04.894984   14556 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16761-7242/.minikube/config/config.json: open /home/jenkins/minikube-integration/16761-7242/.minikube/config/config.json: no such file or directory
	I0626 19:36:04.895356   14556 out.go:303] Setting JSON to true
	I0626 19:36:04.896115   14556 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1112,"bootTime":1687807053,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 19:36:04.896166   14556 start.go:137] virtualization: kvm guest
	I0626 19:36:04.898255   14556 out.go:97] [download-only-081510] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 19:36:04.899866   14556 out.go:169] MINIKUBE_LOCATION=16761
	I0626 19:36:04.898435   14556 notify.go:220] Checking for updates...
	I0626 19:36:04.903069   14556 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 19:36:04.904553   14556 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 19:36:04.906068   14556 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:36:04.907648   14556 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0626 19:36:04.910357   14556 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0626 19:36:04.910759   14556 config.go:182] Loaded profile config "download-only-081510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0626 19:36:04.910792   14556 start.go:862] api.Load failed for download-only-081510: filestore "download-only-081510": Docker machine "download-only-081510" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0626 19:36:04.910866   14556 driver.go:373] Setting default libvirt URI to qemu:///system
	W0626 19:36:04.910891   14556 start.go:862] api.Load failed for download-only-081510: filestore "download-only-081510": Docker machine "download-only-081510" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0626 19:36:04.941171   14556 out.go:97] Using the kvm2 driver based on existing profile
	I0626 19:36:04.941203   14556 start.go:297] selected driver: kvm2
	I0626 19:36:04.941208   14556 start.go:954] validating driver "kvm2" against &{Name:download-only-081510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-081510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:36:04.941592   14556 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:36:04.941656   14556 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16761-7242/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0626 19:36:04.955508   14556 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0626 19:36:04.956192   14556 cni.go:84] Creating CNI manager for ""
	I0626 19:36:04.956211   14556 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0626 19:36:04.956218   14556 start_flags.go:319] config:
	{Name:download-only-081510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-081510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:36:04.956350   14556 iso.go:125] acquiring lock: {Name:mkcdf247d6d78baf4b08f9f42de5d66e8ec3e8ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:36:04.958153   14556 out.go:97] Starting control plane node download-only-081510 in cluster download-only-081510
	I0626 19:36:04.958168   14556 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 19:36:05.467742   14556 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 19:36:05.467782   14556 cache.go:57] Caching tarball of preloaded images
	I0626 19:36:05.467950   14556 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 19:36:05.469862   14556 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0626 19:36:05.469875   14556 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0626 19:36:05.578788   14556 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:36a3ccedce25b36b9ffc5201ce124dec -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 19:36:18.877996   14556 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0626 19:36:18.878087   14556 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16761-7242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0626 19:36:19.734706   14556 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 19:36:19.734832   14556 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/download-only-081510/config.json ...
	I0626 19:36:19.735015   14556 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 19:36:19.735203   14556 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16761-7242/.minikube/cache/linux/amd64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-081510"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.05s)

TestDownloadOnly/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-081510
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.52s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-462503 --alsologtostderr --binary-mirror http://127.0.0.1:39443 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-462503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-462503
--- PASS: TestBinaryMirror (0.52s)

TestOffline (129.1s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-623378 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-623378 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m8.018476672s)
helpers_test.go:175: Cleaning up "offline-crio-623378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-623378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-623378: (1.082530276s)
--- PASS: TestOffline (129.10s)

TestAddons/Setup (157.46s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-118062 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-118062 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m37.456267081s)
--- PASS: TestAddons/Setup (157.46s)

TestAddons/parallel/Registry (16.74s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 27.268914ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zjfg6" [b91c3b06-35cc-451a-bbef-ba61f98d3f3f] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013000231s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5bdlk" [22b2f5ee-9096-47f8-87ec-b8917bab1abe] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011445581s
addons_test.go:316: (dbg) Run:  kubectl --context addons-118062 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-118062 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-118062 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.109381173s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 ip
2023/06/26 19:39:17 [DEBUG] GET http://192.168.39.92:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.74s)

TestAddons/parallel/InspektorGadget (11.06s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-l86c7" [18fb5c3b-0c4d-4f71-800b-7ed5d2ea386a] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.009194845s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-118062
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-118062: (6.048975995s)
--- PASS: TestAddons/parallel/InspektorGadget (11.06s)

TestAddons/parallel/MetricsServer (6.33s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 27.679721ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-944s6" [02b88f9d-a6aa-4824-b872-389e6fe198a8] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.017795618s
addons_test.go:391: (dbg) Run:  kubectl --context addons-118062 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-118062 addons disable metrics-server --alsologtostderr -v=1: (1.207111035s)
--- PASS: TestAddons/parallel/MetricsServer (6.33s)

TestAddons/parallel/HelmTiller (12.59s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 6.651998ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-bzkls" [740b2eee-2d63-4a0c-a3ac-aa6fb6ff775c] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012316483s
addons_test.go:449: (dbg) Run:  kubectl --context addons-118062 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-118062 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.156455936s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.59s)

TestAddons/parallel/CSI (123.58s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 28.218142ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-118062 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-118062 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [78badf21-a7e8-4d17-8667-e1857d4c8f38] Pending
helpers_test.go:344: "task-pv-pod" [78badf21-a7e8-4d17-8667-e1857d4c8f38] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [78badf21-a7e8-4d17-8667-e1857d4c8f38] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.013462303s
addons_test.go:560: (dbg) Run:  kubectl --context addons-118062 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-118062 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-118062 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-118062 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-118062 delete pod task-pv-pod: (1.341345659s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-118062 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-118062 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-118062 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-118062 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c5dfe732-b954-4c63-8e09-73f02d5a2b04] Pending
helpers_test.go:344: "task-pv-pod-restore" [c5dfe732-b954-4c63-8e09-73f02d5a2b04] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c5dfe732-b954-4c63-8e09-73f02d5a2b04] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.017947003s
addons_test.go:602: (dbg) Run:  kubectl --context addons-118062 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-118062 delete pod task-pv-pod-restore: (1.309687233s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-118062 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-118062 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-118062 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.601585471s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-118062 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (123.58s)

TestAddons/parallel/Headlamp (14.38s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-118062 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-118062 --alsologtostderr -v=1: (1.361467352s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-22z64" [1771f198-1e08-4649-bcbd-a15cc6d44d8d] Pending
helpers_test.go:344: "headlamp-66f6498c69-22z64" [1771f198-1e08-4649-bcbd-a15cc6d44d8d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-22z64" [1771f198-1e08-4649-bcbd-a15cc6d44d8d] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-22z64" [1771f198-1e08-4649-bcbd-a15cc6d44d8d] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.01720253s
--- PASS: TestAddons/parallel/Headlamp (14.38s)
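
Note: outside the test harness the same readiness check can be expressed with kubectl wait; a sketch assuming the addons-118062 profile is running (kubectl wait and its flags are standard, and the 8m timeout simply mirrors the test's own budget):

  $ out/minikube-linux-amd64 addons enable headlamp -p addons-118062 --alsologtostderr -v=1
  # block until the headlamp pod reports Ready, as the test does by polling
  $ kubectl --context addons-118062 -n headlamp wait pod -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m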

TestAddons/parallel/CloudSpanner (5.38s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-fb67554b8-5grvb" [81dba13f-8a8d-4b34-a882-4346559a77b4] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010611563s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-118062
--- PASS: TestAddons/parallel/CloudSpanner (5.38s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-118062 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-118062 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
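
Note: what this test asserts is that the gcp-auth addon replicates its secret into namespaces created after the addon was enabled. A hand-run equivalent (names verbatim from the log):

  $ kubectl --context addons-118062 create ns new-namespace
  # the secret should appear in the new namespace without any manual copy
  $ kubectl --context addons-118062 get secret gcp-auth -n new-namespace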

TestCertOptions (81.42s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-778022 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-778022 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m19.433271062s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-778022 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-778022 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-778022 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-778022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-778022
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-778022: (1.513285844s)
--- PASS: TestCertOptions (81.42s)
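
Note: the SAN and port assertions can be spot-checked manually. A sketch, assuming the cert-options-778022 profile were still running (the grep targets are assumptions about openssl's text layout, not part of the test):

  # the requested --apiserver-ips/--apiserver-names should show up as Subject Alternative Names
  $ out/minikube-linux-amd64 -p cert-options-778022 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
  # admin.conf should point kubectl at the non-default apiserver port
  $ out/minikube-linux-amd64 ssh -p cert-options-778022 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555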

TestCertExpiration (266.33s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-686634 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-686634 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m2.362640534s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-686634 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-686634 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (22.981946355s)
helpers_test.go:175: Cleaning up "cert-expiration-686634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-686634
--- PASS: TestCertExpiration (266.33s)
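
Note: the two starts exercise certificate rotation: the first issues certs that expire in 3m, and the second, run after they have lapsed, must regenerate them before the cluster comes back. Flag usage, verbatim from the runs above (8760h is one year):

  $ out/minikube-linux-amd64 start -p cert-expiration-686634 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
  # ...wait out the 3m expiry, then restart with a sane lifetime
  $ out/minikube-linux-amd64 start -p cert-expiration-686634 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio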

TestForceSystemdFlag (56.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-922418 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-922418 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.365724413s)
docker_test.go:126: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-922418 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-922418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-922418
--- PASS: TestForceSystemdFlag (56.57s)
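
Note: --force-systemd should surface as CRI-O's cgroup manager. A sketch of the check behind the cat above (grepping for cgroup_manager, the standard CRI-O TOML key, is an assumption about where minikube writes the setting):

  $ out/minikube-linux-amd64 -p force-systemd-flag-922418 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
  # expected: cgroup_manager = "systemd"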

TestForceSystemdEnv (54.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-166324 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-166324 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.503913787s)
helpers_test.go:175: Cleaning up "force-systemd-env-166324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-166324
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-166324: (1.367439674s)
--- PASS: TestForceSystemdEnv (54.87s)

TestKVMDriverInstallOrUpdate (4.91s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.91s)

TestErrorSpam/setup (45.56s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-991253 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-991253 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-991253 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-991253 --driver=kvm2  --container-runtime=crio: (45.558400473s)
--- PASS: TestErrorSpam/setup (45.56s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 pause
--- PASS: TestErrorSpam/pause (1.44s)

TestErrorSpam/unpause (1.56s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

TestErrorSpam/stop (2.2s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 stop: (2.073159028s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-991253 --log_dir /tmp/nospam-991253 stop
--- PASS: TestErrorSpam/stop (2.20s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16761-7242/.minikube/files/etc/test/nested/copy/14443/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (100.19s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244475 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-244475 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m40.19120233s)
--- PASS: TestFunctional/serial/StartWithProxy (100.19s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.91s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244475 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-244475 --alsologtostderr -v=8: (32.909709242s)
functional_test.go:659: soft start took 32.910311641s for "functional-244475" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.91s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-244475 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 cache add registry.k8s.io/pause:3.3: (1.065299851s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.00s)

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-244475 /tmp/TestFunctionalserialCacheCmdcacheadd_local1434667101/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 cache add minikube-local-cache-test:functional-244475
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 cache add minikube-local-cache-test:functional-244475: (1.844585709s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 cache delete minikube-local-cache-test:functional-244475
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-244475
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244475 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (208.147961ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
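
Note: the cache subcommands exercised in this group form one lifecycle; condensed, with commands verbatim from the runs logged above:

  $ out/minikube-linux-amd64 -p functional-244475 cache add registry.k8s.io/pause:latest                  # cache on the host and load into the node
  $ out/minikube-linux-amd64 -p functional-244475 ssh sudo crictl rmi registry.k8s.io/pause:latest        # drop it inside the node
  $ out/minikube-linux-amd64 -p functional-244475 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
  $ out/minikube-linux-amd64 -p functional-244475 cache reload                                            # push cached images back into the node
  $ out/minikube-linux-amd64 -p functional-244475 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds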

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 kubectl -- --context functional-244475 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)
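
Note: everything after the bare -- is handed to a version-matched kubectl unchanged, so the invocation above is equivalent to running kubectl directly against the same context:

  $ out/minikube-linux-amd64 -p functional-244475 kubectl -- --context functional-244475 get pods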

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-244475 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (37.16s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-244475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.154621301s)
functional_test.go:757: restart took 37.15473474s for "functional-244475" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.16s)
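
Note: --extra-config keys take the form component.flag=value and are applied by soft-restarting the existing cluster, which is why this test measures a restart rather than a fresh start:

  $ out/minikube-linux-amd64 start -p functional-244475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all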

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-244475 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
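
Note: the phase lines above come from the control-plane pods' JSON. A rough jsonpath equivalent of the phase half of the check (the Ready status would additionally need the status.conditions array):

  $ kubectl --context functional-244475 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'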

TestFunctional/serial/LogsCmd (1.34s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 logs: (1.343917957s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

TestFunctional/serial/LogsFileCmd (1.32s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 logs --file /tmp/TestFunctionalserialLogsFileCmd1074227812/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 logs --file /tmp/TestFunctionalserialLogsFileCmd1074227812/001/logs.txt: (1.32293753s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

TestFunctional/serial/InvalidService (5s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-244475 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-244475
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-244475: exit status 115 (277.857637ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.57:31635 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-244475 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-244475 delete -f testdata/invalidsvc.yaml: (1.334468491s)
--- PASS: TestFunctional/serial/InvalidService (5.00s)
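
Note: exit status 115 / SVC_UNREACHABLE is the expected outcome here: the service exists but selects no running pod. One way to confirm that from kubectl (the endpoints check is an illustration, not part of the test):

  $ kubectl --context functional-244475 get endpoints invalid-svc   # ENDPOINTS column stays empty with no ready pods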

TestFunctional/parallel/ConfigCmd (0.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244475 config get cpus: exit status 14 (51.018816ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244475 config get cpus: exit status 14 (42.624995ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.27s)
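
Note: exit status 14 with "specified key could not be found in config" is how minikube signals a missing key, so the unset/get pairs assert both directions of the round trip:

  $ out/minikube-linux-amd64 -p functional-244475 config set cpus 2
  $ out/minikube-linux-amd64 -p functional-244475 config get cpus     # prints 2, exit 0
  $ out/minikube-linux-amd64 -p functional-244475 config unset cpus
  $ out/minikube-linux-amd64 -p functional-244475 config get cpus     # exit 14, key not found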

TestFunctional/parallel/DashboardCmd (22.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-244475 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-244475 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21762: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.21s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244475 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-244475 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.546202ms)

-- stdout --
	* [functional-244475] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0626 19:49:00.377689   21398 out.go:296] Setting OutFile to fd 1 ...
	I0626 19:49:00.377829   21398 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:49:00.377840   21398 out.go:309] Setting ErrFile to fd 2...
	I0626 19:49:00.377846   21398 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:49:00.377973   21398 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 19:49:00.378582   21398 out.go:303] Setting JSON to false
	I0626 19:49:00.379560   21398 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1887,"bootTime":1687807053,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 19:49:00.379619   21398 start.go:137] virtualization: kvm guest
	I0626 19:49:00.382031   21398 out.go:177] * [functional-244475] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 19:49:00.383860   21398 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 19:49:00.383897   21398 notify.go:220] Checking for updates...
	I0626 19:49:00.385441   21398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 19:49:00.387130   21398 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 19:49:00.388650   21398 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:49:00.390052   21398 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 19:49:00.391558   21398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 19:49:00.393242   21398 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 19:49:00.393760   21398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:49:00.393804   21398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:49:00.409418   21398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0626 19:49:00.409806   21398 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:49:00.410404   21398 main.go:141] libmachine: Using API Version  1
	I0626 19:49:00.410430   21398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:49:00.410778   21398 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:49:00.410998   21398 main.go:141] libmachine: (functional-244475) Calling .DriverName
	I0626 19:49:00.411240   21398 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 19:49:00.411536   21398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:49:00.411574   21398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:49:00.425710   21398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36401
	I0626 19:49:00.426124   21398 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:49:00.426697   21398 main.go:141] libmachine: Using API Version  1
	I0626 19:49:00.426722   21398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:49:00.427070   21398 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:49:00.427275   21398 main.go:141] libmachine: (functional-244475) Calling .DriverName
	I0626 19:49:00.465145   21398 out.go:177] * Using the kvm2 driver based on existing profile
	I0626 19:49:00.467243   21398 start.go:297] selected driver: kvm2
	I0626 19:49:00.467260   21398 start.go:954] validating driver "kvm2" against &{Name:functional-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-244475 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.57 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:49:00.467387   21398 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 19:49:00.469966   21398 out.go:177] 
	W0626 19:49:00.471365   21398 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0626 19:49:00.472914   21398 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244475 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244475 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-244475 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.260612ms)

-- stdout --
	* [functional-244475] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0626 19:49:00.665422   21492 out.go:296] Setting OutFile to fd 1 ...
	I0626 19:49:00.665599   21492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:49:00.665612   21492 out.go:309] Setting ErrFile to fd 2...
	I0626 19:49:00.665620   21492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:49:00.665873   21492 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 19:49:00.666587   21492 out.go:303] Setting JSON to false
	I0626 19:49:00.667901   21492 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1888,"bootTime":1687807053,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 19:49:00.667981   21492 start.go:137] virtualization: kvm guest
	I0626 19:49:00.670405   21492 out.go:177] * [functional-244475] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	I0626 19:49:00.671825   21492 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 19:49:00.671848   21492 notify.go:220] Checking for updates...
	I0626 19:49:00.673274   21492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 19:49:00.674888   21492 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 19:49:00.676266   21492 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 19:49:00.677774   21492 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 19:49:00.679513   21492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 19:49:00.681484   21492 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 19:49:00.681876   21492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:49:00.681934   21492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:49:00.697544   21492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42511
	I0626 19:49:00.697940   21492 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:49:00.698482   21492 main.go:141] libmachine: Using API Version  1
	I0626 19:49:00.698505   21492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:49:00.698909   21492 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:49:00.699105   21492 main.go:141] libmachine: (functional-244475) Calling .DriverName
	I0626 19:49:00.699328   21492 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 19:49:00.699690   21492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 19:49:00.699728   21492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 19:49:00.714034   21492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0626 19:49:00.714378   21492 main.go:141] libmachine: () Calling .GetVersion
	I0626 19:49:00.714829   21492 main.go:141] libmachine: Using API Version  1
	I0626 19:49:00.714852   21492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 19:49:00.715160   21492 main.go:141] libmachine: () Calling .GetMachineName
	I0626 19:49:00.715365   21492 main.go:141] libmachine: (functional-244475) Calling .DriverName
	I0626 19:49:00.748622   21492 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0626 19:49:00.749965   21492 start.go:297] selected driver: kvm2
	I0626 19:49:00.749981   21492 start.go:954] validating driver "kvm2" against &{Name:functional-244475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16703/minikube-v1.30.1-1687455737-16703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-244475 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.57 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:49:00.750116   21492 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 19:49:00.752442   21492 out.go:177] 
	W0626 19:49:00.754570   21492 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0626 19:49:00.755937   21492 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
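
Note: status -f takes a Go template over the status struct (the .Host/.Kubelet/.APIServer/.Kubeconfig fields used in the run above), and -o json emits the same data machine-readably; a trimmed sketch:

  $ out/minikube-linux-amd64 -p functional-244475 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
  $ out/minikube-linux-amd64 -p functional-244475 status -o json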

TestFunctional/parallel/ServiceCmdConnect (27.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-244475 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-244475 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-nfmjp" [b7738563-09b6-40bc-a7b6-662bf5febd4e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-nfmjp" [b7738563-09b6-40bc-a7b6-662bf5febd4e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 27.021997219s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.57:30172
functional_test.go:1674: http://192.168.50.57:30172: success! body:

Hostname: hello-node-connect-6fb669fc84-nfmjp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.57:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.57:30172
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.64s)
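
Note: the end-to-end pattern here is deploy, expose as NodePort, then let minikube resolve the node URL. A sketch built from the logged commands (the final curl stands in for the Go HTTP client the test actually uses):

  $ kubectl --context functional-244475 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  $ kubectl --context functional-244475 expose deployment hello-node-connect --type=NodePort --port=8080
  $ out/minikube-linux-amd64 -p functional-244475 service hello-node-connect --url   # e.g. http://192.168.50.57:30172
  $ curl -s "$(out/minikube-linux-amd64 -p functional-244475 service hello-node-connect --url)"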

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (55s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d26dcc2b-db61-424e-b9b2-5ce0879652c2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013639153s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-244475 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-244475 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-244475 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-244475 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-244475 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f72ff9b7-2933-4019-a46d-e3ed52fa918b] Pending
helpers_test.go:344: "sp-pod" [f72ff9b7-2933-4019-a46d-e3ed52fa918b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f72ff9b7-2933-4019-a46d-e3ed52fa918b] Running
E0626 19:49:02.105134   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.043110727s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-244475 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-244475 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-244475 delete -f testdata/storage-provisioner/pod.yaml: (1.546852345s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-244475 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c8b51aa6-2bed-47f9-9d6f-b57cc9a5f091] Pending
helpers_test.go:344: "sp-pod" [c8b51aa6-2bed-47f9-9d6f-b57cc9a5f091] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c8b51aa6-2bed-47f9-9d6f-b57cc9a5f091] Running
E0626 19:49:21.307399   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
2023/06/26 19:49:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.011161606s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-244475 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.00s)
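
Note: the two sp-pod generations share one claim, which is what proves persistence: a file written by the first pod must still be visible after that pod is deleted and a fresh one mounts the same PVC. Condensed from the runs above:

  $ kubectl --context functional-244475 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-244475 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-244475 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-244475 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-244475 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
  $ kubectl --context functional-244475 exec sp-pod -- ls /tmp/mount                     # foo survives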

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (0.91s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh -n functional-244475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 cp functional-244475:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd149700115/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh -n functional-244475 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.91s)
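
Note: minikube cp takes a bare path on the host side and <node>:<path> on the guest side, in either direction, which is what the two invocations above exercise. By hand (file names illustrative):

    out/minikube-linux-amd64 -p functional-244475 cp ./local.txt functional-244475:/home/docker/remote.txt    # host -> guest
    out/minikube-linux-amd64 -p functional-244475 cp functional-244475:/home/docker/remote.txt ./copy-back.txt   # guest -> host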

TestFunctional/parallel/MySQL (28.39s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-244475 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-6z974" [72b8be68-1bd6-4ee9-9d11-e528d6f34495] Pending
helpers_test.go:344: "mysql-7db894d786-6z974" [72b8be68-1bd6-4ee9-9d11-e528d6f34495] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-6z974" [72b8be68-1bd6-4ee9-9d11-e528d6f34495] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.041095214s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-244475 exec mysql-7db894d786-6z974 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-244475 exec mysql-7db894d786-6z974 -- mysql -ppassword -e "show databases;": exit status 1 (166.841512ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-244475 exec mysql-7db894d786-6z974 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-244475 exec mysql-7db894d786-6z974 -- mysql -ppassword -e "show databases;": exit status 1 (485.693728ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-244475 exec mysql-7db894d786-6z974 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.39s)
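
Note: the two ERROR 2002 exits above are expected start-up noise rather than failures: the pod reports Running as soon as the container starts, but mysqld only accepts connections once it has created /var/run/mysqld/mysqld.sock, so the test keeps re-running the query until it succeeds. A rough shell equivalent of that retry loop (pod name from this run; interval illustrative):

    until kubectl --context functional-244475 exec mysql-7db894d786-6z974 -- \
          mysql -ppassword -e "show databases;"; do
        sleep 2   # mysqld is still initializing; retry until the socket exists
    done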

TestFunctional/parallel/FileSync (0.2s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14443/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo cat /etc/test/nested/copy/14443/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)
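
Note: file sync recreates anything under the client's file-sync directory (by default ~/.minikube/files) at the same absolute path inside the VM, which is how /etc/test/nested/copy/14443/hosts got there; the 14443 path component matches this run's test process ID (the 14443 in the E0626 lines). Seeding a file that way (paths illustrative):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/14443
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/14443/hosts
    # after the cluster (re)starts, the file is visible in the guest:
    out/minikube-linux-amd64 -p functional-244475 ssh "sudo cat /etc/test/nested/copy/14443/hosts"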

TestFunctional/parallel/CertSync (1.4s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14443.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo cat /etc/ssl/certs/14443.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14443.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo cat /usr/share/ca-certificates/14443.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/144432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo cat /etc/ssl/certs/144432.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/144432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo cat /usr/share/ca-certificates/144432.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)
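
Note: the /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 names checked above are OpenSSL subject-hash filenames; installing a cert under its hash name is what makes it visible to the system trust store, so the test verifies both the .pem copies and the hash-named entries. The hash for a given PEM can be computed with:

    openssl x509 -hash -noout -in /etc/ssl/certs/14443.pem   # prints the 8-hex-digit hash, e.g. 51391683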

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-244475 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
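
Note: the go-template above ranges over the labels map of the first node and prints only the keys. An equivalent spot-check with jsonpath (illustrative):

    kubectl --context functional-244475 get nodes -o jsonpath='{.items[0].metadata.labels}'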

TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244475 ssh "sudo systemctl is-active docker": exit status 1 (238.157503ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244475 ssh "sudo systemctl is-active containerd": exit status 1 (275.687119ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
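
Note: the two non-zero exits above are the assertion, not failures: with crio selected as the container runtime (per the profile config), systemctl is-active prints "inactive" and exits with status 3 for docker and containerd, and the ssh wrapper surfaces that as exit status 1. The same probe against the active runtime should succeed:

    out/minikube-linux-amd64 -p functional-244475 ssh "sudo systemctl is-active crio"   # expected: "active", exit 0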

TestFunctional/parallel/License (0.63s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/ServiceCmd/DeployApp (28.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-244475 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-244475 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-p6bd6" [24789ac3-0760-408a-a0a6-b5e8386af5e8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-p6bd6" [24789ac3-0760-408a-a0a6-b5e8386af5e8] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 28.031394419s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (28.28s)
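
Note: the NodePort allocated by the expose step above is what the later ServiceCmd subtests resolve (https://192.168.50.57:32715). It can be read back directly (jsonpath illustrative):

    kubectl --context functional-244475 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'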

TestFunctional/parallel/ServiceCmd/List (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 service list -o json
functional_test.go:1493: Took "589.522864ms" to run "out/minikube-linux-amd64 -p functional-244475 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "340.530758ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "52.286678ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "432.977547ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "45.25716ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.57:32715
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/MountCmd/any-port (9.91s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdany-port1650601905/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1687808939916836653" to /tmp/TestFunctionalparallelMountCmdany-port1650601905/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1687808939916836653" to /tmp/TestFunctionalparallelMountCmdany-port1650601905/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1687808939916836653" to /tmp/TestFunctionalparallelMountCmdany-port1650601905/001/test-1687808939916836653
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.293162ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh -- ls -la /mount-9p
E0626 19:49:00.824055   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 19:49:00.829762   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 19:49:00.840346   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 26 19:48 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 26 19:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 26 19:48 test-1687808939916836653
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh cat /mount-9p/test-1687808939916836653
E0626 19:49:01.143144   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-244475 replace --force -f testdata/busybox-mount-test.yaml
E0626 19:49:01.464140   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a4cdbde7-f4e5-45e8-ac51-acc250570880] Pending
helpers_test.go:344: "busybox-mount" [a4cdbde7-f4e5-45e8-ac51-acc250570880] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0626 19:49:03.385799   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [a4cdbde7-f4e5-45e8-ac51-acc250570880] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a4cdbde7-f4e5-45e8-ac51-acc250570880] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.02085886s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-244475 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdany-port1650601905/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.91s)
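
Note: the first findmnt above fails only because the background mount daemon had not yet finished exporting the 9p share; the test retries until the mount appears. The interleaved E0626 cert_rotation lines (here and elsewhere in this run) appear to be leftover client-go watchers for the already-deleted addons-118062 profile, not part of this test. The manual equivalent of the mount flow (host path illustrative):

    out/minikube-linux-amd64 mount -p functional-244475 /tmp/hostdir:/mount-9p &   # keep running while mounted
    out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-244475 ssh "sudo umount -f /mount-9p"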

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 service hello-node --url
E0626 19:49:00.861076   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 19:49:00.901365   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 19:49:00.982355   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.57:32715
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (1.02s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 version -o=json --components: (1.020267967s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244475 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-244475
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-244475
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244475 image ls --format short --alsologtostderr:
I0626 19:49:23.618701   22705 out.go:296] Setting OutFile to fd 1 ...
I0626 19:49:23.618831   22705 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.618842   22705 out.go:309] Setting ErrFile to fd 2...
I0626 19:49:23.618849   22705 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.618961   22705 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
I0626 19:49:23.619445   22705 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.619532   22705 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.619880   22705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.619926   22705 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.633353   22705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
I0626 19:49:23.633747   22705 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.634788   22705 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.634820   22705 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.635256   22705 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.635430   22705 main.go:141] libmachine: (functional-244475) Calling .GetState
I0626 19:49:23.637524   22705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.637571   22705 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.651305   22705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
I0626 19:49:23.651635   22705 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.652085   22705 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.652101   22705 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.652390   22705 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.652601   22705 main.go:141] libmachine: (functional-244475) Calling .DriverName
I0626 19:49:23.652873   22705 ssh_runner.go:195] Run: systemctl --version
I0626 19:49:23.652899   22705 main.go:141] libmachine: (functional-244475) Calling .GetSSHHostname
I0626 19:49:23.655695   22705 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.656058   22705 main.go:141] libmachine: (functional-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d5:7a", ip: ""} in network mk-functional-244475: {Iface:virbr1 ExpiryTime:2023-06-26 20:45:39 +0000 UTC Type:0 Mac:52:54:00:9f:d5:7a Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:functional-244475 Clientid:01:52:54:00:9f:d5:7a}
I0626 19:49:23.656085   22705 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined IP address 192.168.50.57 and MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.656315   22705 main.go:141] libmachine: (functional-244475) Calling .GetSSHPort
I0626 19:49:23.656471   22705 main.go:141] libmachine: (functional-244475) Calling .GetSSHKeyPath
I0626 19:49:23.656638   22705 main.go:141] libmachine: (functional-244475) Calling .GetSSHUsername
I0626 19:49:23.656788   22705 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/functional-244475/id_rsa Username:docker}
I0626 19:49:23.740367   22705 ssh_runner.go:195] Run: sudo crictl images --output json
I0626 19:49:23.792332   22705 main.go:141] libmachine: Making call to close driver server
I0626 19:49:23.792346   22705 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:23.792621   22705 main.go:141] libmachine: (functional-244475) DBG | Closing plugin on server side
I0626 19:49:23.792683   22705 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:23.792696   22705 main.go:141] libmachine: Making call to close connection to plugin binary
I0626 19:49:23.792708   22705 main.go:141] libmachine: Making call to close driver server
I0626 19:49:23.792721   22705 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:23.792953   22705 main.go:141] libmachine: (functional-244475) DBG | Closing plugin on server side
I0626 19:49:23.792993   22705 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:23.793006   22705 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
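
Note: as the stderr trace shows, image ls is a thin wrapper: minikube launches the kvm2 driver plugin, opens an ssh session to the node, runs crictl there, and formats the result client-side. The underlying call can be issued directly:

    out/minikube-linux-amd64 -p functional-244475 ssh "sudo crictl images --output json"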

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244475 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.27.3            | 08a0c939e61b7 | 122MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-proxy              | v1.27.3            | 5780543258cf0 | 72.7MB |
| registry.k8s.io/kube-scheduler          | v1.27.3            | 41697ceeb70b3 | 59.8MB |
| docker.io/library/nginx                 | latest             | eb4a571591807 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-244475  | ffd4cfbbe753e | 34.1MB |
| docker.io/library/mysql                 | 5.7                | 2be84dd575ee2 | 588MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| registry.k8s.io/kube-controller-manager | v1.27.3            | 7cffc01dba0e1 | 114MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/minikube-local-cache-test     | functional-244475  | 34dba70238f25 | 3.35kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244475 image ls --format table --alsologtostderr:
I0626 19:49:23.846356   22791 out.go:296] Setting OutFile to fd 1 ...
I0626 19:49:23.846453   22791 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.846461   22791 out.go:309] Setting ErrFile to fd 2...
I0626 19:49:23.846465   22791 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.846567   22791 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
I0626 19:49:23.847081   22791 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.847166   22791 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.847476   22791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.847519   22791 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.862937   22791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42265
I0626 19:49:23.863454   22791 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.864026   22791 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.864053   22791 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.864390   22791 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.864587   22791 main.go:141] libmachine: (functional-244475) Calling .GetState
I0626 19:49:23.866403   22791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.866447   22791 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.882905   22791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
I0626 19:49:23.883269   22791 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.883664   22791 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.883686   22791 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.884012   22791 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.884163   22791 main.go:141] libmachine: (functional-244475) Calling .DriverName
I0626 19:49:23.884460   22791 ssh_runner.go:195] Run: systemctl --version
I0626 19:49:23.884490   22791 main.go:141] libmachine: (functional-244475) Calling .GetSSHHostname
I0626 19:49:23.888075   22791 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.888525   22791 main.go:141] libmachine: (functional-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d5:7a", ip: ""} in network mk-functional-244475: {Iface:virbr1 ExpiryTime:2023-06-26 20:45:39 +0000 UTC Type:0 Mac:52:54:00:9f:d5:7a Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:functional-244475 Clientid:01:52:54:00:9f:d5:7a}
I0626 19:49:23.888562   22791 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined IP address 192.168.50.57 and MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.888670   22791 main.go:141] libmachine: (functional-244475) Calling .GetSSHPort
I0626 19:49:23.888811   22791 main.go:141] libmachine: (functional-244475) Calling .GetSSHKeyPath
I0626 19:49:23.888961   22791 main.go:141] libmachine: (functional-244475) Calling .GetSSHUsername
I0626 19:49:23.889090   22791 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/functional-244475/id_rsa Username:docker}
I0626 19:49:23.985629   22791 ssh_runner.go:195] Run: sudo crictl images --output json
I0626 19:49:24.044718   22791 main.go:141] libmachine: Making call to close driver server
I0626 19:49:24.044742   22791 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:24.045006   22791 main.go:141] libmachine: (functional-244475) DBG | Closing plugin on server side
I0626 19:49:24.045105   22791 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:24.045132   22791 main.go:141] libmachine: Making call to close connection to plugin binary
I0626 19:49:24.045152   22791 main.go:141] libmachine: Making call to close driver server
I0626 19:49:24.045168   22791 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:24.045367   22791 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:24.045392   22791 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244475 image ls --format json --alsologtostderr:
[{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"72713623"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba08055
8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":["docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1","docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde"],"repoTags":["docker.io/library/mysql:5.7"],"size":"588268197"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-244475"],"size":"34114467"},{"id":"34dba70238f253543495f6c3b227431805d071ae11a768fc7081d7ef2d9111dc","repoDigests":["localhost/minikube-local-ca
che-test@sha256:2bb12cb8a80c35b880deb00bfb082d2100f3a66d976ed872943580a1f0393f58"],"repoTags":["localhost/minikube-local-cache-test:functional-244475"],"size":"3345"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e","registry.k8s.io/kube-controller-manager@sha256:d3bdc2087
6edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"113919286"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93e
fc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["regis
try.k8s.io/pause:3.1"],"size":"746911"},{"id":"eb4a57159180767450cb8426e6367f11b999653d8f185b5e3b78a9ca30c2c31d","repoDigests":["docker.io/library/nginx@sha256:593dac25b7733ffb7afe1a72649a43e574778bf025ad60514ef40f6b5d606247","docker.io/library/nginx@sha256:d2b2f2980e9ccc570e5726b56b54580f23a018b7b7314c9eaff7e5e479c78657"],"repoTags":["docker.io/library/nginx:latest"],"size":"191044354"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb","registry.k8s.io/kube-apiserver@sha256:fd03335
dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"122065872"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"59811126"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244475 image ls --format json --alsologtostderr:
I0626 19:49:23.831476   22779 out.go:296] Setting OutFile to fd 1 ...
I0626 19:49:23.831590   22779 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.831627   22779 out.go:309] Setting ErrFile to fd 2...
I0626 19:49:23.831646   22779 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.831820   22779 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
I0626 19:49:23.832528   22779 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.832665   22779 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.833177   22779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.833265   22779 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.848908   22779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
I0626 19:49:23.849325   22779 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.849957   22779 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.849984   22779 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.850316   22779 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.850488   22779 main.go:141] libmachine: (functional-244475) Calling .GetState
I0626 19:49:23.852251   22779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.852287   22779 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.866803   22779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
I0626 19:49:23.867166   22779 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.867668   22779 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.867709   22779 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.868061   22779 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.868239   22779 main.go:141] libmachine: (functional-244475) Calling .DriverName
I0626 19:49:23.868443   22779 ssh_runner.go:195] Run: systemctl --version
I0626 19:49:23.868482   22779 main.go:141] libmachine: (functional-244475) Calling .GetSSHHostname
I0626 19:49:23.871179   22779 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.871725   22779 main.go:141] libmachine: (functional-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d5:7a", ip: ""} in network mk-functional-244475: {Iface:virbr1 ExpiryTime:2023-06-26 20:45:39 +0000 UTC Type:0 Mac:52:54:00:9f:d5:7a Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:functional-244475 Clientid:01:52:54:00:9f:d5:7a}
I0626 19:49:23.871762   22779 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined IP address 192.168.50.57 and MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.871948   22779 main.go:141] libmachine: (functional-244475) Calling .GetSSHPort
I0626 19:49:23.872116   22779 main.go:141] libmachine: (functional-244475) Calling .GetSSHKeyPath
I0626 19:49:23.872252   22779 main.go:141] libmachine: (functional-244475) Calling .GetSSHUsername
I0626 19:49:23.872386   22779 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/functional-244475/id_rsa Username:docker}
I0626 19:49:23.959644   22779 ssh_runner.go:195] Run: sudo crictl images --output json
I0626 19:49:24.003158   22779 main.go:141] libmachine: Making call to close driver server
I0626 19:49:24.003178   22779 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:24.003479   22779 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:24.003497   22779 main.go:141] libmachine: Making call to close connection to plugin binary
I0626 19:49:24.003513   22779 main.go:141] libmachine: Making call to close driver server
I0626 19:49:24.003523   22779 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:24.003799   22779 main.go:141] libmachine: (functional-244475) DBG | Closing plugin on server side
I0626 19:49:24.003825   22779 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:24.003840   22779 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244475 image ls --format yaml --alsologtostderr:
- id: 2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests:
- docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1
- docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde
repoTags:
- docker.io/library/mysql:5.7
size: "588268197"
- id: eb4a57159180767450cb8426e6367f11b999653d8f185b5e3b78a9ca30c2c31d
repoDigests:
- docker.io/library/nginx@sha256:593dac25b7733ffb7afe1a72649a43e574778bf025ad60514ef40f6b5d606247
- docker.io/library/nginx@sha256:d2b2f2980e9ccc570e5726b56b54580f23a018b7b7314c9eaff7e5e479c78657
repoTags:
- docker.io/library/nginx:latest
size: "191044354"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 34dba70238f253543495f6c3b227431805d071ae11a768fc7081d7ef2d9111dc
repoDigests:
- localhost/minikube-local-cache-test@sha256:2bb12cb8a80c35b880deb00bfb082d2100f3a66d976ed872943580a1f0393f58
repoTags:
- localhost/minikube-local-cache-test:functional-244475
size: "3345"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "72713623"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "59811126"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-244475
size: "34114467"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
- registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "113919286"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "122065872"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244475 image ls --format yaml --alsologtostderr:
I0626 19:49:23.615940   22706 out.go:296] Setting OutFile to fd 1 ...
I0626 19:49:23.616066   22706 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.616077   22706 out.go:309] Setting ErrFile to fd 2...
I0626 19:49:23.616084   22706 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.616226   22706 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
I0626 19:49:23.616737   22706 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.616829   22706 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.617148   22706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.617190   22706 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.631202   22706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42541
I0626 19:49:23.631827   22706 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.632556   22706 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.632583   22706 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.632996   22706 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.633203   22706 main.go:141] libmachine: (functional-244475) Calling .GetState
I0626 19:49:23.635468   22706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.635529   22706 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.649312   22706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46025
I0626 19:49:23.649668   22706 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.650124   22706 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.650150   22706 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.650513   22706 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.650676   22706 main.go:141] libmachine: (functional-244475) Calling .DriverName
I0626 19:49:23.650868   22706 ssh_runner.go:195] Run: systemctl --version
I0626 19:49:23.650899   22706 main.go:141] libmachine: (functional-244475) Calling .GetSSHHostname
I0626 19:49:23.654003   22706 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.656753   22706 main.go:141] libmachine: (functional-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d5:7a", ip: ""} in network mk-functional-244475: {Iface:virbr1 ExpiryTime:2023-06-26 20:45:39 +0000 UTC Type:0 Mac:52:54:00:9f:d5:7a Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:functional-244475 Clientid:01:52:54:00:9f:d5:7a}
I0626 19:49:23.656785   22706 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined IP address 192.168.50.57 and MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.656909   22706 main.go:141] libmachine: (functional-244475) Calling .GetSSHPort
I0626 19:49:23.657024   22706 main.go:141] libmachine: (functional-244475) Calling .GetSSHKeyPath
I0626 19:49:23.657115   22706 main.go:141] libmachine: (functional-244475) Calling .GetSSHUsername
I0626 19:49:23.657223   22706 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/functional-244475/id_rsa Username:docker}
I0626 19:49:23.740454   22706 ssh_runner.go:195] Run: sudo crictl images --output json
I0626 19:49:23.781462   22706 main.go:141] libmachine: Making call to close driver server
I0626 19:49:23.781482   22706 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:23.781756   22706 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:23.781773   22706 main.go:141] libmachine: Making call to close connection to plugin binary
I0626 19:49:23.781789   22706 main.go:141] libmachine: Making call to close driver server
I0626 19:49:23.781797   22706 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:23.782016   22706 main.go:141] libmachine: (functional-244475) DBG | Closing plugin on server side
I0626 19:49:23.782064   22706 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:23.782077   22706 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
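The YAML printed by `image ls --format yaml` above is a plain sequence of records with id, repoDigests, repoTags, and size keys, so it can be consumed directly from Go. A minimal sketch, assuming gopkg.in/yaml.v3 is available; the imageInfo struct is ours and simply mirrors the keys in the output above:

package main

import (
	"fmt"
	"log"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// imageInfo mirrors the record shape seen in the `image ls --format yaml`
// output above; it is a local convenience type, not a minikube API.
type imageInfo struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// Run the same command the test runs and decode its stdout.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-244475",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []imageInfo
	if err := yaml.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%.12s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}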
TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244475 ssh pgrep buildkitd: exit status 1 (217.955815ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image build -t localhost/my-image:functional-244475 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 image build -t localhost/my-image:functional-244475 testdata/build --alsologtostderr: (3.455991907s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244475 image build -t localhost/my-image:functional-244475 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 00eb01459e7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-244475
--> eeea3217d57
Successfully tagged localhost/my-image:functional-244475
eeea3217d57f6882ea8c95c13170f7f32234365cb2320ca66a47a16a947932e9
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244475 image build -t localhost/my-image:functional-244475 testdata/build --alsologtostderr:
I0626 19:49:23.841181   22785 out.go:296] Setting OutFile to fd 1 ...
I0626 19:49:23.841349   22785 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.841361   22785 out.go:309] Setting ErrFile to fd 2...
I0626 19:49:23.841366   22785 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 19:49:23.841500   22785 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
I0626 19:49:23.842042   22785 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.842692   22785 config.go:182] Loaded profile config "functional-244475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 19:49:23.843203   22785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.843244   22785 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.858868   22785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
I0626 19:49:23.859256   22785 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.859837   22785 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.859862   22785 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.860163   22785 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.860374   22785 main.go:141] libmachine: (functional-244475) Calling .GetState
I0626 19:49:23.862516   22785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0626 19:49:23.862569   22785 main.go:141] libmachine: Launching plugin server for driver kvm2
I0626 19:49:23.878209   22785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37629
I0626 19:49:23.878585   22785 main.go:141] libmachine: () Calling .GetVersion
I0626 19:49:23.879067   22785 main.go:141] libmachine: Using API Version  1
I0626 19:49:23.879086   22785 main.go:141] libmachine: () Calling .SetConfigRaw
I0626 19:49:23.879395   22785 main.go:141] libmachine: () Calling .GetMachineName
I0626 19:49:23.879769   22785 main.go:141] libmachine: (functional-244475) Calling .DriverName
I0626 19:49:23.879946   22785 ssh_runner.go:195] Run: systemctl --version
I0626 19:49:23.879973   22785 main.go:141] libmachine: (functional-244475) Calling .GetSSHHostname
I0626 19:49:23.886171   22785 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.886529   22785 main.go:141] libmachine: (functional-244475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:d5:7a", ip: ""} in network mk-functional-244475: {Iface:virbr1 ExpiryTime:2023-06-26 20:45:39 +0000 UTC Type:0 Mac:52:54:00:9f:d5:7a Iaid: IPaddr:192.168.50.57 Prefix:24 Hostname:functional-244475 Clientid:01:52:54:00:9f:d5:7a}
I0626 19:49:23.886745   22785 main.go:141] libmachine: (functional-244475) DBG | domain functional-244475 has defined IP address 192.168.50.57 and MAC address 52:54:00:9f:d5:7a in network mk-functional-244475
I0626 19:49:23.886807   22785 main.go:141] libmachine: (functional-244475) Calling .GetSSHPort
I0626 19:49:23.886960   22785 main.go:141] libmachine: (functional-244475) Calling .GetSSHKeyPath
I0626 19:49:23.887066   22785 main.go:141] libmachine: (functional-244475) Calling .GetSSHUsername
I0626 19:49:23.887191   22785 sshutil.go:53] new ssh client: &{IP:192.168.50.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/functional-244475/id_rsa Username:docker}
I0626 19:49:23.980931   22785 build_images.go:151] Building image from path: /tmp/build.23239537.tar
I0626 19:49:23.981006   22785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0626 19:49:23.993113   22785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.23239537.tar
I0626 19:49:24.003797   22785 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.23239537.tar: stat -c "%s %y" /var/lib/minikube/build/build.23239537.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.23239537.tar': No such file or directory
I0626 19:49:24.003829   22785 ssh_runner.go:362] scp /tmp/build.23239537.tar --> /var/lib/minikube/build/build.23239537.tar (3072 bytes)
I0626 19:49:24.036772   22785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.23239537
I0626 19:49:24.052020   22785 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.23239537 -xf /var/lib/minikube/build/build.23239537.tar
I0626 19:49:24.063190   22785 crio.go:297] Building image: /var/lib/minikube/build/build.23239537
I0626 19:49:24.063249   22785 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-244475 /var/lib/minikube/build/build.23239537 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0626 19:49:27.227645   22785 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-244475 /var/lib/minikube/build/build.23239537 --cgroup-manager=cgroupfs: (3.16437129s)
I0626 19:49:27.227719   22785 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.23239537
I0626 19:49:27.236445   22785 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.23239537.tar
I0626 19:49:27.245431   22785 build_images.go:207] Built localhost/my-image:functional-244475 from /tmp/build.23239537.tar
I0626 19:49:27.245453   22785 build_images.go:123] succeeded building to: functional-244475
I0626 19:49:27.245456   22785 build_images.go:124] failed building to: 
I0626 19:49:27.245482   22785 main.go:141] libmachine: Making call to close driver server
I0626 19:49:27.245498   22785 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:27.245795   22785 main.go:141] libmachine: (functional-244475) DBG | Closing plugin on server side
I0626 19:49:27.245823   22785 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:27.245855   22785 main.go:141] libmachine: Making call to close connection to plugin binary
I0626 19:49:27.245880   22785 main.go:141] libmachine: Making call to close driver server
I0626 19:49:27.245893   22785 main.go:141] libmachine: (functional-244475) Calling .Close
I0626 19:49:27.246110   22785 main.go:141] libmachine: Successfully made call to close driver server
I0626 19:49:27.246125   22785 main.go:141] libmachine: (functional-244475) DBG | Closing plugin on server side
I0626 19:49:27.246130   22785 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)
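The stderr above spells out how `image build` works on the crio runtime: the build context is packed into /tmp/build.23239537.tar on the host, copied to /var/lib/minikube/build over ssh, untarred, and built with `sudo podman build --cgroup-manager=cgroupfs`. A minimal sketch of just the packaging step under those assumptions; tarDir and the paths in main are ours, not minikube's:

package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"path/filepath"
)

// tarDir packs every regular file under dir into a tarball at out,
// storing paths relative to dir (roughly what the build.*.tar step does).
func tarDir(dir, out string) error {
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || !info.Mode().IsRegular() {
			return err // skip directories and non-regular files
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = filepath.ToSlash(rel)
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	if err := tarDir("testdata/build", "/tmp/build.example.tar"); err != nil {
		log.Fatal(err)
	}
}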
TestFunctional/parallel/ImageCommands/Setup (2.15s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.127550705s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-244475
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.15s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image load --daemon gcr.io/google-containers/addon-resizer:functional-244475 --alsologtostderr
E0626 19:49:05.946046   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 image load --daemon gcr.io/google-containers/addon-resizer:functional-244475 --alsologtostderr: (3.639815368s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.88s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image load --daemon gcr.io/google-containers/addon-resizer:functional-244475 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 image load --daemon gcr.io/google-containers/addon-resizer:functional-244475 --alsologtostderr: (2.54928123s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)
TestFunctional/parallel/MountCmd/specific-port (1.96s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdspecific-port3010706514/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (258.226343ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh -- ls -la /mount-9p
E0626 19:49:11.067003   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdspecific-port3010706514/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244475 ssh "sudo umount -f /mount-9p": exit status 1 (228.330489ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-244475 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdspecific-port3010706514/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)
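Note that the first findmnt probe above exits non-zero because the 9p server is still coming up; the test simply probes again and succeeds. A minimal retry sketch in Go, using the same profile and mount point as the log; the retry count and delay are our choices, not the test's:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same probe the test runs over ssh: is /mount-9p a live 9p mount?
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-244475",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount is up after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(time.Second) // the mount daemon may still be starting; try again
	}
	fmt.Println("mount never appeared")
}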
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.338847764s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-244475
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image load --daemon gcr.io/google-containers/addon-resizer:functional-244475 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 image load --daemon gcr.io/google-containers/addon-resizer:functional-244475 --alsologtostderr: (3.708906199s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.28s)
TestFunctional/parallel/MountCmd/VerifyCleanup (1.09s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217200223/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217200223/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217200223/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-244475 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217200223/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217200223/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2217200223/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.09s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image save gcr.io/google-containers/addon-resizer:functional-244475 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 image save gcr.io/google-containers/addon-resizer:functional-244475 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.284284659s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image rm gcr.io/google-containers/addon-resizer:functional-244475 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.97s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.729143127s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.97s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-244475
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-244475 image save --daemon gcr.io/google-containers/addon-resizer:functional-244475 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-244475 image save --daemon gcr.io/google-containers/addon-resizer:functional-244475 --alsologtostderr: (2.600146363s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-244475
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.63s)
TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-244475
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)
TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-244475
--- PASS: TestFunctional/delete_my-image_image (0.01s)
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-244475
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
TestIngressAddonLegacy/StartLegacyK8sCluster (120.87s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-759751 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0626 19:49:41.788109   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 19:50:22.749118   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-759751 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m0.871014524s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (120.87s)
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.31s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-759751 addons enable ingress --alsologtostderr -v=5
E0626 19:51:44.669700   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-759751 addons enable ingress --alsologtostderr -v=5: (18.314835471s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.31s)
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-759751 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)
TestJSONOutput/start/Command (99.94s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-423517 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0626 19:54:52.629626   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-423517 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.936901459s)
--- PASS: TestJSONOutput/start/Command (99.94s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.62s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-423517 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-423517 --output=json --user=testUser
E0626 19:56:14.550587   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.59s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (7.08s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-423517 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-423517 --output=json --user=testUser: (7.081440451s)
--- PASS: TestJSONOutput/stop/Command (7.08s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.17s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-180624 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-180624 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.657808ms)
-- stdout --
	{"specversion":"1.0","id":"b0a64499-cdd7-4e08-a07e-7594044042ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-180624] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad9b44d5-f951-4bb6-bd0c-63ded415dd2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16761"}}
	{"specversion":"1.0","id":"baf2f4a8-fc48-4c19-8b95-c89d1e734dc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"88834b9f-11ee-4c29-8d1c-ff135f67b127","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig"}}
	{"specversion":"1.0","id":"99751ff8-0db8-44a5-9b8f-3922c1cc8850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube"}}
	{"specversion":"1.0","id":"22c45fbd-2056-4bf3-b3e3-49657f6e948d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9970212b-2c01-48b2-98ff-fbaba68cf63e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"97b514ea-f4c4-4089-8240-0deda46c9121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-180624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-180624
--- PASS: TestErrorJSONOutput (0.17s)
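Each stdout line above is a CloudEvents 1.0 envelope, which is also what the DistinctCurrentSteps and IncreasingCurrentSteps subtests earlier inspect: step events (type io.k8s.sigs.minikube.step) carry data.currentstep as a string, and those step numbers must never repeat or go backwards. A minimal sketch of that check, reading events from stdin; the cloudEvent type is ours and declares only the fields used here:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// cloudEvent declares just the envelope fields this check needs; in the
// sample output above every value inside "data" is a string.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Usage: minikube start --output=json ... | go run checksteps.go
	sc := bufio.NewScanner(os.Stdin)
	last := -1
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-JSON lines and non-step events
		}
		step, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		if step <= last {
			fmt.Printf("currentstep %d repeats or goes backwards (last was %d)\n", step, last)
		}
		last = step
	}
}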
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)
TestMinikubeProfile (94.08s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-525822 --driver=kvm2  --container-runtime=crio
E0626 19:56:48.326641   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:48.331916   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:48.342197   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:48.362470   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:48.402739   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:48.483085   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:48.643513   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:48.964095   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:49.604512   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:50.885053   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:53.445519   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:56:58.566713   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-525822 --driver=kvm2  --container-runtime=crio: (45.824141062s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-528087 --driver=kvm2  --container-runtime=crio
E0626 19:57:08.807661   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 19:57:29.288286   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-528087 --driver=kvm2  --container-runtime=crio: (45.498428035s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-525822
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-528087
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-528087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-528087
helpers_test.go:175: Cleaning up "first-525822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-525822
--- PASS: TestMinikubeProfile (94.08s)
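The `profile list -ojson` calls above return machine-readable profile data. A minimal sketch of reading it, assuming the top-level object carries "valid" and "invalid" profile arrays whose entries have a Name field; treat those field names as an assumption to verify against the actual output of your minikube version:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList assumes the valid/invalid shape described above; check the
// field names against real -ojson output before relying on them.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}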
TestMountStart/serial/StartWithMountFirst (27.79s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-084753 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0626 19:58:10.248853   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-084753 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.79042796s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.79s)
TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-084753 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-084753 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
TestMountStart/serial/StartWithMountSecond (29.94s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-102170 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0626 19:58:30.704664   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-102170 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.936232544s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.94s)
TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102170 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102170 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)
TestMountStart/serial/DeleteFirst (0.88s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-084753 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)
TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102170 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102170 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)
TestMountStart/serial/Stop (1.13s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-102170
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-102170: (1.131405611s)
--- PASS: TestMountStart/serial/Stop (1.13s)

TestMountStart/serial/RestartStopped (23.78s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-102170
E0626 19:58:58.390750   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 19:59:00.824565   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-102170: (22.782166073s)
--- PASS: TestMountStart/serial/RestartStopped (23.78s)
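The restart above passes no mount flags, yet VerifyMountPostStop still finds the 9p mount afterwards: minikube persists each profile's settings under the .minikube/profiles/<name>/ directory visible in the log paths, so a bare start reloads them. A small sketch that inspects the saved profile config; the config.json filename is an assumption here, only the profiles/<name>/ layout appears in the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Profile settings survive a stop; "minikube start -p <name>" with
        // no flags reads them back. Path layout follows the log; the
        // config filename is assumed.
        home, _ := os.UserHomeDir()
        cfg := filepath.Join(home, ".minikube", "profiles", "mount-start-2-102170", "config.json")
        data, err := os.ReadFile(cfg)
        if err != nil {
            fmt.Println("no saved profile config:", err)
            return
        }
        fmt.Printf("saved config (%d bytes) drives the flag-less restart above\n", len(data))
    }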

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102170 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102170 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (108.93s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-050558 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0626 19:59:32.170038   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-050558 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.5078472s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.93s)

TestMultiNode/serial/DeployApp2Nodes (6.16s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-050558 -- rollout status deployment/busybox: (4.47669892s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-xw4h2 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-z697w -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-xw4h2 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-z697w -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-xw4h2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-050558 -- exec busybox-67b7f59bb-z697w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.16s)
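The flow above is: apply the manifest, wait for the rollout, read the pod names via jsonpath, then exec the same three nslookup targets inside every pod so DNS is proven on each node. A condensed sketch of that loop, assuming kubectl on PATH and the kubeconfig context from the run above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        ctx := "multinode-050558" // kubeconfig context from the run above

        // Read the deployed pod names, as the test does with jsonpath.
        out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
            "-o", "jsonpath={.items[*].metadata.name}").Output()
        if err != nil {
            panic(err)
        }

        // Resolve the same three names from inside every pod; a failure in
        // either pod would point at broken cluster DNS on that pod's node.
        targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, pod := range strings.Fields(string(out)) {
            for _, host := range targets {
                if err := exec.Command("kubectl", "--context", ctx, "exec", pod,
                    "--", "nslookup", host).Run(); err != nil {
                    fmt.Printf("nslookup %s failed in %s: %v\n", host, pod, err)
                }
            }
        }
    }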

TestMultiNode/serial/AddNode (46.24s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-050558 -v 3 --alsologtostderr
E0626 20:01:48.326430   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-050558 -v 3 --alsologtostderr: (45.654345281s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.24s)

TestMultiNode/serial/ProfileList (0.2s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (7.35s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp testdata/cp-test.txt multinode-050558:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp multinode-050558:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1420814181/001/cp-test_multinode-050558.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp multinode-050558:/home/docker/cp-test.txt multinode-050558-m02:/home/docker/cp-test_multinode-050558_multinode-050558-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m02 "sudo cat /home/docker/cp-test_multinode-050558_multinode-050558-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp multinode-050558:/home/docker/cp-test.txt multinode-050558-m03:/home/docker/cp-test_multinode-050558_multinode-050558-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m03 "sudo cat /home/docker/cp-test_multinode-050558_multinode-050558-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp testdata/cp-test.txt multinode-050558-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp multinode-050558-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1420814181/001/cp-test_multinode-050558-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp multinode-050558-m02:/home/docker/cp-test.txt multinode-050558:/home/docker/cp-test_multinode-050558-m02_multinode-050558.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558 "sudo cat /home/docker/cp-test_multinode-050558-m02_multinode-050558.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp multinode-050558-m02:/home/docker/cp-test.txt multinode-050558-m03:/home/docker/cp-test_multinode-050558-m02_multinode-050558-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m03 "sudo cat /home/docker/cp-test_multinode-050558-m02_multinode-050558-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp testdata/cp-test.txt multinode-050558-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp multinode-050558-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1420814181/001/cp-test_multinode-050558-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp multinode-050558-m03:/home/docker/cp-test.txt multinode-050558:/home/docker/cp-test_multinode-050558-m03_multinode-050558.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558 "sudo cat /home/docker/cp-test_multinode-050558-m03_multinode-050558.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 cp multinode-050558-m03:/home/docker/cp-test.txt multinode-050558-m02:/home/docker/cp-test_multinode-050558-m03_multinode-050558-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 ssh -n multinode-050558-m02 "sudo cat /home/docker/cp-test_multinode-050558-m03_multinode-050558-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.35s)
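Every line of the copy matrix above is one of two moves: cp a file onto a node (or between nodes), then ssh in and cat it back to confirm the bytes survived. The whole matrix reduces to this round-trip helper, sketched here with os/exec (roundTrip is our name, not a suite helper):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // roundTrip copies src into a node and reads it back over ssh, the
    // check the suite repeats above for every node pair.
    func roundTrip(profile, node, src string) error {
        dst := "/home/docker/cp-test.txt"
        if err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).Run(); err != nil {
            return fmt.Errorf("cp failed: %v", err)
        }
        got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
        if err != nil {
            return fmt.Errorf("read-back failed: %v", err)
        }
        want, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        if !bytes.Equal(got, want) {
            return fmt.Errorf("content mismatch after copy to %s", node)
        }
        return nil
    }

    func main() {
        fmt.Println(roundTrip("multinode-050558", "multinode-050558-m02", "testdata/cp-test.txt"))
    }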

TestMultiNode/serial/StopNode (2.95s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 node stop m03
E0626 20:02:16.010941   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-050558 node stop m03: (2.077104191s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-050558 status: exit status 7 (428.035281ms)

-- stdout --
	multinode-050558
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-050558-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-050558-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-050558 status --alsologtostderr: exit status 7 (438.891301ms)

-- stdout --
	multinode-050558
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-050558-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-050558-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0626 20:02:16.979297   29834 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:02:16.979396   29834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:02:16.979406   29834 out.go:309] Setting ErrFile to fd 2...
	I0626 20:02:16.979411   29834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:02:16.979524   29834 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:02:16.979669   29834 out.go:303] Setting JSON to false
	I0626 20:02:16.979694   29834 mustload.go:65] Loading cluster: multinode-050558
	I0626 20:02:16.979725   29834 notify.go:220] Checking for updates...
	I0626 20:02:16.980046   29834 config.go:182] Loaded profile config "multinode-050558": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:02:16.980063   29834 status.go:255] checking status of multinode-050558 ...
	I0626 20:02:16.980418   29834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:02:16.980498   29834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:02:16.996518   29834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0626 20:02:16.996889   29834 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:02:16.997450   29834 main.go:141] libmachine: Using API Version  1
	I0626 20:02:16.997470   29834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:02:16.997879   29834 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:02:16.998112   29834 main.go:141] libmachine: (multinode-050558) Calling .GetState
	I0626 20:02:16.999638   29834 status.go:330] multinode-050558 host status = "Running" (err=<nil>)
	I0626 20:02:16.999655   29834 host.go:66] Checking if "multinode-050558" exists ...
	I0626 20:02:16.999963   29834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:02:17.000007   29834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:02:17.014819   29834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I0626 20:02:17.015196   29834 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:02:17.015651   29834 main.go:141] libmachine: Using API Version  1
	I0626 20:02:17.015670   29834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:02:17.016010   29834 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:02:17.016219   29834 main.go:141] libmachine: (multinode-050558) Calling .GetIP
	I0626 20:02:17.018873   29834 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:02:17.019397   29834 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:02:17.019430   29834 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:02:17.019596   29834 host.go:66] Checking if "multinode-050558" exists ...
	I0626 20:02:17.019961   29834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:02:17.020007   29834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:02:17.034388   29834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0626 20:02:17.034769   29834 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:02:17.035278   29834 main.go:141] libmachine: Using API Version  1
	I0626 20:02:17.035306   29834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:02:17.035601   29834 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:02:17.035750   29834 main.go:141] libmachine: (multinode-050558) Calling .DriverName
	I0626 20:02:17.035939   29834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 20:02:17.035962   29834 main.go:141] libmachine: (multinode-050558) Calling .GetSSHHostname
	I0626 20:02:17.038622   29834 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:02:17.039021   29834 main.go:141] libmachine: (multinode-050558) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:21:4e", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 20:59:38 +0000 UTC Type:0 Mac:52:54:00:b7:21:4e Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-050558 Clientid:01:52:54:00:b7:21:4e}
	I0626 20:02:17.039054   29834 main.go:141] libmachine: (multinode-050558) DBG | domain multinode-050558 has defined IP address 192.168.39.229 and MAC address 52:54:00:b7:21:4e in network mk-multinode-050558
	I0626 20:02:17.039184   29834 main.go:141] libmachine: (multinode-050558) Calling .GetSSHPort
	I0626 20:02:17.039371   29834 main.go:141] libmachine: (multinode-050558) Calling .GetSSHKeyPath
	I0626 20:02:17.039539   29834 main.go:141] libmachine: (multinode-050558) Calling .GetSSHUsername
	I0626 20:02:17.039731   29834 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558/id_rsa Username:docker}
	I0626 20:02:17.137413   29834 ssh_runner.go:195] Run: systemctl --version
	I0626 20:02:17.144632   29834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:02:17.159036   29834 kubeconfig.go:92] found "multinode-050558" server: "https://192.168.39.229:8443"
	I0626 20:02:17.159064   29834 api_server.go:166] Checking apiserver status ...
	I0626 20:02:17.159095   29834 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 20:02:17.172810   29834 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1112/cgroup
	I0626 20:02:17.182418   29834 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod3bf9120f8ca60da96af0ed761aeff36b/crio-f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f"
	I0626 20:02:17.182470   29834 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod3bf9120f8ca60da96af0ed761aeff36b/crio-f74a9c2e5ef75f326750cecfa145b3c756cb6047d98a8925617bfa1da6846d0f/freezer.state
	I0626 20:02:17.191883   29834 api_server.go:204] freezer state: "THAWED"
	I0626 20:02:17.191902   29834 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0626 20:02:17.197165   29834 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
	I0626 20:02:17.197183   29834 status.go:421] multinode-050558 apiserver status = Running (err=<nil>)
	I0626 20:02:17.197190   29834 status.go:257] multinode-050558 status: &{Name:multinode-050558 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0626 20:02:17.197203   29834 status.go:255] checking status of multinode-050558-m02 ...
	I0626 20:02:17.197498   29834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:02:17.197521   29834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:02:17.215232   29834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0626 20:02:17.215564   29834 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:02:17.216026   29834 main.go:141] libmachine: Using API Version  1
	I0626 20:02:17.216045   29834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:02:17.216324   29834 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:02:17.216518   29834 main.go:141] libmachine: (multinode-050558-m02) Calling .GetState
	I0626 20:02:17.218037   29834 status.go:330] multinode-050558-m02 host status = "Running" (err=<nil>)
	I0626 20:02:17.218059   29834 host.go:66] Checking if "multinode-050558-m02" exists ...
	I0626 20:02:17.218344   29834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:02:17.218386   29834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:02:17.232178   29834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37097
	I0626 20:02:17.232540   29834 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:02:17.232983   29834 main.go:141] libmachine: Using API Version  1
	I0626 20:02:17.233001   29834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:02:17.233288   29834 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:02:17.233471   29834 main.go:141] libmachine: (multinode-050558-m02) Calling .GetIP
	I0626 20:02:17.236124   29834 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:02:17.236624   29834 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:02:17.236648   29834 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:02:17.236819   29834 host.go:66] Checking if "multinode-050558-m02" exists ...
	I0626 20:02:17.237107   29834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:02:17.237138   29834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:02:17.250861   29834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
	I0626 20:02:17.251214   29834 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:02:17.251666   29834 main.go:141] libmachine: Using API Version  1
	I0626 20:02:17.251685   29834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:02:17.251988   29834 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:02:17.252173   29834 main.go:141] libmachine: (multinode-050558-m02) Calling .DriverName
	I0626 20:02:17.252343   29834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 20:02:17.252375   29834 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHHostname
	I0626 20:02:17.254687   29834 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:02:17.255037   29834 main.go:141] libmachine: (multinode-050558-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:03:c9", ip: ""} in network mk-multinode-050558: {Iface:virbr1 ExpiryTime:2023-06-26 21:00:42 +0000 UTC Type:0 Mac:52:54:00:86:03:c9 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:multinode-050558-m02 Clientid:01:52:54:00:86:03:c9}
	I0626 20:02:17.255073   29834 main.go:141] libmachine: (multinode-050558-m02) DBG | domain multinode-050558-m02 has defined IP address 192.168.39.133 and MAC address 52:54:00:86:03:c9 in network mk-multinode-050558
	I0626 20:02:17.255213   29834 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHPort
	I0626 20:02:17.255376   29834 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHKeyPath
	I0626 20:02:17.255605   29834 main.go:141] libmachine: (multinode-050558-m02) Calling .GetSSHUsername
	I0626 20:02:17.255732   29834 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16761-7242/.minikube/machines/multinode-050558-m02/id_rsa Username:docker}
	I0626 20:02:17.348634   29834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 20:02:17.361288   29834 status.go:257] multinode-050558-m02 status: &{Name:multinode-050558-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0626 20:02:17.361320   29834 status.go:255] checking status of multinode-050558-m03 ...
	I0626 20:02:17.361664   29834 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0626 20:02:17.361694   29834 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0626 20:02:17.376407   29834 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44179
	I0626 20:02:17.376834   29834 main.go:141] libmachine: () Calling .GetVersion
	I0626 20:02:17.377572   29834 main.go:141] libmachine: Using API Version  1
	I0626 20:02:17.377613   29834 main.go:141] libmachine: () Calling .SetConfigRaw
	I0626 20:02:17.377976   29834 main.go:141] libmachine: () Calling .GetMachineName
	I0626 20:02:17.378231   29834 main.go:141] libmachine: (multinode-050558-m03) Calling .GetState
	I0626 20:02:17.379845   29834 status.go:330] multinode-050558-m03 host status = "Stopped" (err=<nil>)
	I0626 20:02:17.379857   29834 status.go:343] host is not running, skipping remaining checks
	I0626 20:02:17.379863   29834 status.go:257] multinode-050558-m03 status: &{Name:multinode-050558-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.95s)
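The two "exit status 7" results above are expected, not failures: minikube status exits non-zero whenever any node is not fully running, which is exactly how the test detects that m03 stopped while the other two nodes stayed up. A sketch of consuming that signal from Go; beyond "non-zero means something is stopped", no further meaning for code 7 is asserted here:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "-p", "multinode-050558", "status")
        out, err := cmd.Output()
        fmt.Print(string(out))

        // A non-zero exit (7 in the run above) means at least one node is
        // not fully running; exit 0 means every node reported healthy.
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("status exit code:", ee.ExitCode())
        }
    }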

TestMultiNode/serial/StartAfterStop (33.13s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-050558 node start m03 --alsologtostderr: (32.483278573s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (33.13s)

TestMultiNode/serial/DeleteNode (1.73s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-050558 node delete m03: (1.198293464s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.73s)

TestMultiNode/serial/RestartMultiNode (440.55s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-050558 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0626 20:16:48.328046   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 20:18:30.705078   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:19:00.824593   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 20:21:48.326674   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 20:22:03.873345   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 20:23:30.705466   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-050558 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m20.005048913s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-050558 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (440.55s)

TestMultiNode/serial/ValidateNameConflict (51.18s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-050558
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-050558-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-050558-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.608191ms)

-- stdout --
	* [multinode-050558-m02] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-050558-m02' is duplicated with machine name 'multinode-050558-m02' in profile 'multinode-050558'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-050558-m03 --driver=kvm2  --container-runtime=crio
E0626 20:24:00.824370   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-050558-m03 --driver=kvm2  --container-runtime=crio: (49.891265377s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-050558
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-050558: exit status 80 (222.832012ms)

-- stdout --
	* Adding node m03 to cluster multinode-050558
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-050558-m03 already exists in multinode-050558-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-050558-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.18s)
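Both negative paths above are communicated purely through exit codes: 14 (MK_USAGE) when a new profile name collides with an existing machine name, and 80 (GUEST_NODE_ADD) when the generated node name m03 collides with an existing profile. A sketch that branches on those codes, with the values taken from the runs above:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // exitCode runs minikube and surfaces its exit code so callers can
    // branch on the usage/guest-error codes seen above.
    func exitCode(args ...string) int {
        err := exec.Command("minikube", args...).Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return ee.ExitCode()
        }
        if err != nil {
            return -1 // binary missing or not startable
        }
        return 0
    }

    func main() {
        switch code := exitCode("start", "-p", "multinode-050558-m02", "--driver=kvm2", "--container-runtime=crio"); code {
        case 0:
            fmt.Println("started: no name conflict")
        case 14:
            fmt.Println("MK_USAGE: profile name duplicates an existing machine name")
        default:
            fmt.Println("failed with exit code", code)
        }
    }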

TestScheduledStopUnix (118.15s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-095377 --memory=2048 --driver=kvm2  --container-runtime=crio
E0626 20:29:51.371751   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-095377 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.613654192s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095377 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-095377 -n scheduled-stop-095377
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095377 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095377 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095377 -n scheduled-stop-095377
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095377
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095377 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095377
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-095377: exit status 7 (62.769258ms)

-- stdout --
	scheduled-stop-095377
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095377 -n scheduled-stop-095377
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095377 -n scheduled-stop-095377: exit status 7 (55.971054ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-095377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-095377
--- PASS: TestScheduledStopUnix (118.15s)
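The sequence above is the whole scheduled-stop contract: schedule a stop far in the future, cancel it, re-schedule with a short delay, then observe the host reach Stopped (at which point status exits 7). The same sequence as a sketch, using a fixed sleep where the test polls; the flags match the run above, the wait duration is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func run(args ...string) error { return exec.Command("minikube", args...).Run() }

    func main() {
        p := "scheduled-stop-095377"

        // Schedule a stop well in the future, then cancel it.
        _ = run("stop", "-p", p, "--schedule", "5m")
        _ = run("stop", "-p", p, "--cancel-scheduled")

        // Re-schedule with a short delay and wait for it to fire.
        _ = run("stop", "-p", p, "--schedule", "15s")
        time.Sleep(30 * time.Second)

        // status exits non-zero (7 above) once the host is stopped.
        if err := run("status", "-p", p); err != nil {
            fmt.Println("host stopped as scheduled:", err)
        }
    }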

TestKubernetesUpgrade (216.78s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-598461 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-598461 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.938355623s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-598461
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-598461: (7.105453819s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-598461 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-598461 status --format={{.Host}}: exit status 7 (63.701244ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-598461 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0626 20:33:30.704678   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-598461 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.292650552s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-598461 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-598461 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-598461 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.078552ms)

-- stdout --
	* [kubernetes-upgrade-598461] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-598461
	    minikube start -p kubernetes-upgrade-598461 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5984612 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-598461 --kubernetes-version=v1.27.3
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-598461 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-598461 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.985991138s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-598461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-598461
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-598461: (1.22187156s)
--- PASS: TestKubernetesUpgrade (216.78s)
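The upgrade path above is simply stop, then start again with a newer --kubernetes-version; the downgrade attempt is refused up front with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) before anything is modified, and the printed suggestion is delete-and-recreate. The same sequence as a sketch, with version strings copied from the run above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func mk(args ...string) error { return exec.Command("minikube", args...).Run() }

    func main() {
        p := "kubernetes-upgrade-598461"

        // Upgrade: stop the old cluster, restart it at the newer version.
        _ = mk("stop", "-p", p)
        _ = mk("start", "-p", p, "--kubernetes-version=v1.27.3", "--driver=kvm2", "--container-runtime=crio")

        // Downgrade: rejected immediately (exit 106 above); the suggestion
        // block says to delete and recreate instead.
        if err := mk("start", "-p", p, "--kubernetes-version=v1.16.0", "--driver=kvm2", "--container-runtime=crio"); err != nil {
            fmt.Println("downgrade rejected as expected:", err)
        }
    }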

TestPause/serial/Start (121.44s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-781867 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-781867 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m1.444609235s)
--- PASS: TestPause/serial/Start (121.44s)

TestNetworkPlugins/group/false (2.76s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-606105 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-606105 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (103.404351ms)

-- stdout --
	* [false-606105] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0626 20:31:37.570895   38135 out.go:296] Setting OutFile to fd 1 ...
	I0626 20:31:37.571007   38135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:31:37.571012   38135 out.go:309] Setting ErrFile to fd 2...
	I0626 20:31:37.571016   38135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 20:31:37.571121   38135 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-7242/.minikube/bin
	I0626 20:31:37.571675   38135 out.go:303] Setting JSON to false
	I0626 20:31:37.572650   38135 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4445,"bootTime":1687807053,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 20:31:37.572712   38135 start.go:137] virtualization: kvm guest
	I0626 20:31:37.575216   38135 out.go:177] * [false-606105] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 20:31:37.577022   38135 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 20:31:37.577017   38135 notify.go:220] Checking for updates...
	I0626 20:31:37.579042   38135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 20:31:37.580695   38135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	I0626 20:31:37.583626   38135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	I0626 20:31:37.585402   38135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 20:31:37.587091   38135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 20:31:37.589586   38135 config.go:182] Loaded profile config "kubernetes-upgrade-598461": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0626 20:31:37.589697   38135 config.go:182] Loaded profile config "offline-crio-623378": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:31:37.589800   38135 config.go:182] Loaded profile config "pause-781867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 20:31:37.589901   38135 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 20:31:37.628618   38135 out.go:177] * Using the kvm2 driver based on user configuration
	I0626 20:31:37.630318   38135 start.go:297] selected driver: kvm2
	I0626 20:31:37.630337   38135 start.go:954] validating driver "kvm2" against <nil>
	I0626 20:31:37.630350   38135 start.go:965] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 20:31:37.632473   38135 out.go:177] 
	W0626 20:31:37.634013   38135 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0626 20:31:37.635646   38135 out.go:177] 

** /stderr **
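The exit 14 above comes from a start-time validation rather than a runtime failure: crio ships no built-in pod networking, so minikube rejects --cni=false with that runtime before creating any VM. An illustrative reimplementation of the guard; this mirrors the behaviour shown in the log, not minikube's actual source:

    package main

    import (
        "errors"
        "fmt"
    )

    // validateCNI mirrors the check that produced the MK_USAGE exit above:
    // the crio runtime cannot run pods without some CNI plugin.
    func validateCNI(containerRuntime, cni string) error {
        if containerRuntime == "crio" && cni == "false" {
            return errors.New(`the "crio" container runtime requires CNI`)
        }
        return nil
    }

    func main() {
        if err := validateCNI("crio", "false"); err != nil {
            fmt.Println("X Exiting due to MK_USAGE:", err)
        }
    }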
net_test.go:88: 
----------------------- debugLogs start: false-606105 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-606105

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-606105

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-606105

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-606105

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-606105

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-606105

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-606105

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-606105

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-606105

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-606105

>>> host: /etc/nsswitch.conf:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> host: /etc/hosts:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> host: /etc/resolv.conf:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-606105

>>> host: crictl pods:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> host: crictl containers:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> k8s: describe netcat deployment:
error: context "false-606105" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-606105" does not exist

>>> k8s: netcat logs:
error: context "false-606105" does not exist

>>> k8s: describe coredns deployment:
error: context "false-606105" does not exist

>>> k8s: describe coredns pods:
error: context "false-606105" does not exist

>>> k8s: coredns logs:
error: context "false-606105" does not exist

>>> k8s: describe api server pod(s):
error: context "false-606105" does not exist

>>> k8s: api server logs:
error: context "false-606105" does not exist

>>> host: /etc/cni:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> host: ip a s:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> host: ip r s:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> host: iptables-save:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> host: iptables table nat:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> k8s: describe kube-proxy daemon set:
error: context "false-606105" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-606105" does not exist

>>> k8s: kube-proxy logs:
error: context "false-606105" does not exist

>>> host: kubelet daemon status:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> host: kubelet daemon config:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> k8s: kubelet logs:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-606105

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-606105"

                                                
                                                
----------------------- debugLogs end: false-606105 [took: 2.526703694s] --------------------------------
helpers_test.go:175: Cleaning up "false-606105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-606105
--- PASS: TestNetworkPlugins/group/false (2.76s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480285 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-480285 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (59.017216ms)

-- stdout --
	* [NoKubernetes-480285] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-7242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-7242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (123.52s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480285 --driver=kvm2  --container-runtime=crio
E0626 20:31:48.326604   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-480285 --driver=kvm2  --container-runtime=crio: (2m3.258279124s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-480285 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (123.52s)

TestPause/serial/SecondStartNoReconfiguration (41.82s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-781867 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-781867 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.789351762s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.82s)

TestNoKubernetes/serial/StartWithStopK8s (42.95s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480285 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0626 20:34:00.824102   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-480285 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.67385512s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-480285 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-480285 status -o json: exit status 2 (282.871198ms)

-- stdout --
	{"Name":"NoKubernetes-480285","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-480285
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.95s)

TestPause/serial/Pause (0.83s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-781867 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-781867 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-781867 --output=json --layout=cluster: exit status 2 (268.602236ms)

-- stdout --
	{"Name":"pause-781867","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-781867","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

TestPause/serial/Unpause (0.72s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-781867 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

TestPause/serial/PauseAgain (0.96s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-781867 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.96s)

TestPause/serial/DeletePaused (1.83s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-781867 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-781867 --alsologtostderr -v=5: (1.831153611s)
--- PASS: TestPause/serial/DeletePaused (1.83s)

TestPause/serial/VerifyDeletedResources (0.63s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.63s)

TestNoKubernetes/serial/Start (56.59s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480285 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-480285 --no-kubernetes --driver=kvm2  --container-runtime=crio: (56.589849072s)
--- PASS: TestNoKubernetes/serial/Start (56.59s)

TestStoppedBinaryUpgrade/Setup (2.22s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.22s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-480285 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-480285 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.576358ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

TestNoKubernetes/serial/ProfileList (0.38s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.38s)

TestNoKubernetes/serial/Stop (1.18s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-480285
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-480285: (1.17573298s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (129.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-490377 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-490377 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m9.255378764s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.26s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-490377 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [422a1295-6f28-4deb-ba96-15e3e1caae5a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [422a1295-6f28-4deb-ba96-15e3e1caae5a] Running
E0626 20:39:00.823822   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.030026772s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-490377 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-490377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-490377 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.39s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-123924
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.39s)

TestStartStop/group/no-preload/serial/FirstStart (85.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-934450 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-934450 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (1m25.291594633s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (85.29s)

TestStartStop/group/embed-certs/serial/FirstStart (125.2s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-299839 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-299839 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (2m5.198862902s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (125.20s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (135.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-473235 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-473235 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (2m15.436645536s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (135.44s)

TestStartStop/group/no-preload/serial/DeployApp (11.58s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-934450 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9aa65fd1-3b50-4e42-bcfc-f5557dc491cd] Pending
helpers_test.go:344: "busybox" [9aa65fd1-3b50-4e42-bcfc-f5557dc491cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9aa65fd1-3b50-4e42-bcfc-f5557dc491cd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.036812348s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-934450 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.58s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-934450 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-934450 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.108089774s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-934450 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/embed-certs/serial/DeployApp (11.53s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-299839 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e052b472-8ed7-4775-961c-38f01513b0d4] Pending
helpers_test.go:344: "busybox" [e052b472-8ed7-4775-961c-38f01513b0d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e052b472-8ed7-4775-961c-38f01513b0d4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.024471109s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-299839 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.53s)

TestStartStop/group/old-k8s-version/serial/SecondStart (792.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-490377 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-490377 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m12.537780636s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490377 -n old-k8s-version-490377
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (792.83s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-299839 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-299839 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-473235 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0f284b8a-d7f1-474e-a694-c635bfcf0a18] Pending
helpers_test.go:344: "busybox" [0f284b8a-d7f1-474e-a694-c635bfcf0a18] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0f284b8a-d7f1-474e-a694-c635bfcf0a18] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.040304553s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-473235 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.46s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-473235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-473235 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/no-preload/serial/SecondStart (800.63s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-934450 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
E0626 20:43:30.704653   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-934450 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (13m20.347736853s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-934450 -n no-preload-934450
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (800.63s)

TestStartStop/group/embed-certs/serial/SecondStart (760.86s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-299839 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-299839 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (12m40.603015984s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-299839 -n embed-certs-299839
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (760.86s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (498.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-473235 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
E0626 20:46:31.371920   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 20:46:48.327090   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
E0626 20:48:30.705355   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 20:49:00.824102   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 20:51:48.326785   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/ingress-addon-legacy-759751/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-473235 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (8m17.754644151s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-473235 -n default-k8s-diff-port-473235
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (498.02s)

TestStartStop/group/newest-cni/serial/FirstStart (64.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-421460 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-421460 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (1m4.058719737s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.06s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-421460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-421460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.488589876s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

TestStartStop/group/newest-cni/serial/Stop (12.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-421460 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-421460 --alsologtostderr -v=3: (12.110712617s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-421460 -n newest-cni-421460
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-421460 -n newest-cni-421460: exit status 7 (61.633336ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-421460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/newest-cni/serial/SecondStart (51.78s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-421460 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
E0626 21:08:30.705532   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/functional-244475/client.crt: no such file or directory
E0626 21:08:54.667533   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:08:54.672731   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:08:54.683394   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:08:54.703713   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:08:54.744084   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:08:54.824480   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:08:54.985636   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:08:55.305942   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:08:55.946664   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:08:57.227838   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-421460 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (51.487052341s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-421460 -n newest-cni-421460
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.78s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-421460 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-421460 --alsologtostderr -v=1
E0626 21:08:59.788582   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-421460 -n newest-cni-421460
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-421460 -n newest-cni-421460: exit status 2 (246.593981ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-421460 -n newest-cni-421460
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-421460 -n newest-cni-421460: exit status 2 (238.834415ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-421460 --alsologtostderr -v=1
E0626 21:09:00.824724   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-421460 -n newest-cni-421460
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-421460 -n newest-cni-421460
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)

TestNetworkPlugins/group/auto/Start (100.57s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0626 21:09:04.909421   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
E0626 21:09:15.150198   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m40.574553958s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.57s)

TestNetworkPlugins/group/kindnet/Start (76.04s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m16.043913588s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.04s)

TestNetworkPlugins/group/calico/Start (123.19s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0626 21:09:35.630461   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m3.1945224s)
--- PASS: TestNetworkPlugins/group/calico/Start (123.19s)

TestNetworkPlugins/group/custom-flannel/Start (128.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0626 21:10:16.591476   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m8.502041472s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (128.50s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-82lwp" [3fe62ce7-fac5-47eb-ba82-4c81ee6d12e6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.026169319s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-606105 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-606105 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-q8dgz" [85581a77-404a-416a-b867-d2144cc25e7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-q8dgz" [85581a77-404a-416a-b867-d2144cc25e7f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.01060669s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.51s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-606105 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-606105 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-k6qk4" [94c5c934-fc5c-49be-944f-271e38d89a47] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0626 21:10:44.571491   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:44.576756   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:44.587072   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:44.607327   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:44.647655   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:44.728177   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:44.888852   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:45.209228   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:45.849738   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:47.130278   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:10:49.690816   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-k6qk4" [94c5c934-fc5c-49be-944f-271e38d89a47] Running
E0626 21:10:54.812023   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.010821203s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-606105 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-606105 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
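
The DNS, Localhost and HairPin checks above all exercise the same netcat deployment. A minimal sketch for reproducing them by hand against one of these profiles (auto-606105 here); the kubectl wait line is an illustrative assumption, the harness itself polls the pods:

# deploy the probe workload the suite uses
kubectl --context auto-606105 replace --force -f testdata/netcat-deployment.yaml
# assumption: wait on the deployment instead of polling pod phases
kubectl --context auto-606105 wait --for=condition=available deployment/netcat --timeout=15m
# DNS: resolve the in-cluster API service name
kubectl --context auto-606105 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: the pod reaches its own loopback listener
kubectl --context auto-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the pod reaches itself back through its own service
kubectl --context auto-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"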

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (103.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m43.968610452s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (103.97s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (114.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0626 21:11:25.562213   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m54.18642421s)
--- PASS: TestNetworkPlugins/group/flannel/Start (114.19s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-b6w8k" [76675010-d85a-4917-bf4f-c4f6ec6f44ad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021879118s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-606105 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-606105 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-r2ql7" [c35966a9-237b-49a7-a5b3-465e0f8ad0a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0626 21:11:38.512028   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-r2ql7" [c35966a9-237b-49a7-a5b3-465e0f8ad0a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.011011786s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.42s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-606105 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-606105 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-606105 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-pmnxd" [76da5868-f4a6-415d-8d39-dfe5814aa7b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0626 21:11:53.803498   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:11:53.809654   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:11:53.819941   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:11:53.841026   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:11:53.881217   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:11:53.965494   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:11:54.129507   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:11:54.450867   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:11:55.091352   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:11:56.389105   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-pmnxd" [76da5868-f4a6-415d-8d39-dfe5814aa7b2] Running
E0626 21:11:58.949661   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.009261781s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-606105 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (105.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0626 21:12:03.874877   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
E0626 21:12:04.070104   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
E0626 21:12:06.522788   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/no-preload-934450/client.crt: no such file or directory
E0626 21:12:14.310206   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-606105 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m45.580649339s)
--- PASS: TestNetworkPlugins/group/bridge/Start (105.58s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-606105 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-606105 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-k5r9j" [5d5a9b33-0eaf-42a1-9994-f4608ab89c02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-k5r9j" [5d5a9b33-0eaf-42a1-9994-f4608ab89c02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.010188361s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qv4sr" [76228987-d271-48fb-8da2-aa7da70127d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.023074357s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
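
For the CNI plugins that ship a controller (kindnet, calico, flannel), the ControllerPod checks wait on the label selectors logged above. A hedged sketch of the equivalent manual check, with labels and namespaces taken from this report:

# kindnet runs in kube-system under app=kindnet
kubectl --context kindnet-606105 -n kube-system get pods -l app=kindnet
# calico's node agent runs in kube-system under k8s-app=calico-node
kubectl --context calico-606105 -n kube-system get pods -l k8s-app=calico-node
# flannel's daemon set runs in its own kube-flannel namespace
kubectl --context flannel-606105 -n kube-flannel get pods -l app=flannel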

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-606105 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-606105 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-606105 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-862fv" [3a7369ea-1062-48c9-aacd-c4ef15fe7f8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0626 21:13:15.751507   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/default-k8s-diff-port-473235/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-862fv" [3a7369ea-1062-48c9-aacd-c4ef15fe7f8d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.007643566s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-606105 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-606105 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-606105 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-h9fqp" [e4f03d4c-4c55-46a7-abb3-56e852b58be4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0626 21:13:54.666822   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/old-k8s-version-490377/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-h9fqp" [e4f03d4c-4c55-46a7-abb3-56e852b58be4] Running
E0626 21:14:00.824062   14443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-7242/.minikube/profiles/addons-118062/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.006562038s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-606105 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-606105 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (35/292)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.27.3/cached-images 0
13 TestDownloadOnly/v1.27.3/binaries 0
14 TestDownloadOnly/v1.27.3/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
43 TestHyperKitDriverInstallOrUpdate 0
44 TestHyperkitDriverSkipUpgrade 0
95 TestFunctional/parallel/DockerEnv 0
96 TestFunctional/parallel/PodmanEnv 0
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
106 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
107 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
108 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
109 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
110 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
111 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
144 TestGvisorAddon 0
145 TestImageBuild 0
178 TestKicCustomNetwork 0
179 TestKicExistingNetwork 0
180 TestKicCustomSubnet 0
181 TestKicStaticIP 0
212 TestChangeNoneUser 0
215 TestScheduledStopWindows 0
217 TestSkaffold 0
219 TestInsufficientStorage 0
223 TestMissingContainerUpgrade 0
230 TestStartStop/group/disable-driver-mounts 0.14
234 TestNetworkPlugins/group/kubenet 2.72
243 TestNetworkPlugins/group/cilium 2.97
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-603225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-603225
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-606105 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-606105

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-606105" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-606105" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-606105" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-606105" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-606105" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-606105" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-606105" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-606105" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: ip a s:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: ip r s:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: iptables-save:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: iptables table nat:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-606105" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-606105" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-606105" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: kubelet daemon config:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> k8s: kubelet logs:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-606105

>>> host: docker daemon status:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: docker daemon config:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: docker system info:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: cri-docker daemon status:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: cri-docker daemon config:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: cri-dockerd version:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: containerd daemon status:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: containerd daemon config:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: containerd config dump:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: crio daemon status:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: crio daemon config:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: /etc/crio:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

>>> host: crio config:
* Profile "kubenet-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-606105"

----------------------- debugLogs end: kubenet-606105 [took: 2.577384675s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-606105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-606105
--- SKIP: TestNetworkPlugins/group/kubenet (2.72s)
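
Note: every probe in the kubenet-606105 debugLogs dump above fails with a
profile or context error because the test was skipped before "minikube start"
ever ran for this profile, so neither the minikube profile nor a kubeconfig
context was created; the empty kubectl config dump (clusters, contexts and
users all null) is consistent with that. A minimal sketch of reproducing the
three kinds of check by hand, assuming kubectl and minikube are on PATH (the
commands are real; the expected outcomes are inferred from the log above):

  $ minikube profile list                       # kubenet-606105 absent from the table
  $ kubectl config get-contexts                 # empty kubeconfig: nothing listed
  $ kubectl --context kubenet-606105 get pods   # fails: the context does not exist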

                                                
                                    
TestNetworkPlugins/group/cilium (2.97s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-606105 [pass: true] --------------------------------
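
Note: the netcat entries below are DNS and TCP/UDP connectivity probes that
debugLogs normally runs from the in-cluster netcat client pod against the
cluster DNS service at 10.96.0.10. A sketch of the equivalent manual probe on
a running cluster (the pod name "netcat" is an assumption taken from the
netcat deployment named elsewhere in this report):

  $ kubectl --context cilium-606105 exec netcat -- nslookup kubernetes.default
  $ kubectl --context cilium-606105 exec netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local

Here they all fail for the same reason as in the kubenet section above: the
cilium-606105 profile was never started, so the context does not exist.
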
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-606105

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-606105

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-606105

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-606105

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-606105

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-606105

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-606105

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-606105

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-606105

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-606105

>>> host: /etc/nsswitch.conf:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /etc/hosts:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /etc/resolv.conf:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-606105

>>> host: crictl pods:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: crictl containers:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> k8s: describe netcat deployment:
error: context "cilium-606105" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-606105" does not exist

>>> k8s: netcat logs:
error: context "cilium-606105" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-606105" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-606105" does not exist

>>> k8s: coredns logs:
error: context "cilium-606105" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-606105" does not exist

>>> k8s: api server logs:
error: context "cilium-606105" does not exist

>>> host: /etc/cni:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: ip a s:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: ip r s:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: iptables-save:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: iptables table nat:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-606105

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-606105

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-606105" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-606105" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-606105

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-606105

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-606105" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-606105" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-606105" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-606105" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-606105" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: kubelet daemon config:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> k8s: kubelet logs:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-606105

>>> host: docker daemon status:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: docker daemon config:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: docker system info:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: cri-docker daemon status:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: cri-docker daemon config:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: cri-dockerd version:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: containerd daemon status:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: containerd daemon config:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: containerd config dump:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: crio daemon status:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: crio daemon config:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: /etc/crio:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

>>> host: crio config:
* Profile "cilium-606105" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-606105"

----------------------- debugLogs end: cilium-606105 [took: 2.840631095s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-606105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-606105
--- SKIP: TestNetworkPlugins/group/cilium (2.97s)

                                                
                                    